
mmagic.evaluation.metrics.mae

Evaluation metrics based on pixels.

Module Contents

Classes

MAE

Mean Absolute Error metric for images.

class mmagic.evaluation.metrics.mae.MAE(gt_key: str = 'gt_img', pred_key: str = 'pred_img', mask_key: Optional[str] = None, scaling=1, device='cpu', collect_device: str = 'cpu', prefix: Optional[str] = None)[source]

Bases: mmagic.evaluation.metrics.base_sample_wise_metric.BaseSampleWiseMetric

Mean Absolute Error metric for images.

mean(abs(gt - pred))

Parameters
  • gt_key (str) – Key of ground-truth. Default: ‘gt_img’

  • pred_key (str) – Key of prediction. Default: ‘pred_img’

  • mask_key (str, optional) – Key of mask, if mask_key is None, calculate all regions. Default: None

  • collect_device (str) – Device name used for collecting results from different ranks during distributed training. Must be ‘cpu’ or ‘gpu’. Defaults to ‘cpu’.

  • prefix (str, optional) – The prefix that will be added in the metric names to disambiguate homonymous metrics of different evaluators. If prefix is not provided in the argument, self.default_prefix will be used instead. Default: None

Metrics:
  • MAE (float): Mean Absolute Error
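
The sketch below illustrates the quantity this metric reports, mean(abs(gt - pred)), as a plain NumPy computation. It is an approximation for intuition rather than the class's actual implementation; the scaling argument and the example image shapes are assumptions, and any input normalization (e.g., mapping 8-bit values to [0, 1]) performed internally is not reproduced here.

    import numpy as np

    def mae(gt: np.ndarray, pred: np.ndarray, scaling: float = 1.0) -> float:
        """Mean absolute error over all pixels: mean(abs(gt - pred)) * scaling."""
        gt = gt.astype(np.float64)
        pred = pred.astype(np.float64)
        return float(np.abs(gt - pred).mean() * scaling)

    # Hypothetical 8-bit RGB images of shape (H, W, C).
    gt = np.random.randint(0, 256, (64, 64, 3))
    pred = np.random.randint(0, 256, (64, 64, 3))
    print(mae(gt, pred))

Lower values indicate predictions closer to the ground truth; identical images yield 0.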

metric = MAE[source]
process_image(gt, pred, mask)[source]

Process an image.

Parameters
  • gt (Tensor | np.ndarray) – GT image.

  • pred (Tensor | np.ndarray) – Pred image.

  • mask (Tensor | np.ndarray) – Mask of evaluation.

Returns

MAE result.

Return type

result (np.ndarray)
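
As a rough illustration of the role of mask, a masked MAE can be computed as in the sketch below. This is a NumPy approximation for intuition only, not the method's actual code; the renormalization by the number of masked pixels is an assumption about how the masked region is averaged.

    import numpy as np

    def masked_mae(gt: np.ndarray, pred: np.ndarray, mask: np.ndarray) -> float:
        """MAE restricted to pixels where mask is non-zero."""
        diff = np.abs(gt.astype(np.float64) - pred.astype(np.float64))
        diff = diff * mask
        # Average over the evaluated pixels only, not the full image.
        return float(diff.sum() / max(mask.sum(), 1))

    gt = np.random.rand(32, 32)
    pred = np.random.rand(32, 32)
    mask = np.zeros((32, 32))
    mask[8:24, 8:24] = 1  # evaluate the center patch only
    print(masked_mae(gt, pred, mask))

When mask_key is None, the metric is computed over all pixels, which corresponds to passing an all-ones mask in this sketch.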
