mmagic.evaluation.metrics.mae
Evaluation metrics based on pixels.
Module Contents
Classes
MAE | Mean Absolute Error metric for image.
- class mmagic.evaluation.metrics.mae.MAE(gt_key: str = 'gt_img', pred_key: str = 'pred_img', mask_key: Optional[str] = None, scaling=1, device='cpu', collect_device: str = 'cpu', prefix: Optional[str] = None)
Bases: mmagic.evaluation.metrics.base_sample_wise_metric.BaseSampleWiseMetric
Mean Absolute Error metric for image.
Computed as mean(abs(a - b)), where a is the ground-truth image and b is the prediction.
- Parameters
gt_key (str) – Key of ground truth. Default: 'gt_img'
pred_key (str) – Key of prediction. Default: 'pred_img'
mask_key (str, optional) – Key of mask. If None, the metric is computed over all regions. Default: None
collect_device (str) – Device name used for collecting results from different ranks during distributed training. Must be 'cpu' or 'gpu'. Defaults to 'cpu'.
prefix (str, optional) – Prefix added to the metric names to disambiguate homonymous metrics of different evaluators. If not provided, self.default_prefix will be used instead. Default: None
- Metrics:
MAE (float): Mean absolute error.