
Mean Absolute Percentage Error (MAPE)

Module Interface

class torchmetrics.MeanAbsolutePercentageError(**kwargs)[source]

Compute Mean Absolute Percentage Error (MAPE).

\[\text{MAPE} = \frac{1}{n}\sum_{i=1}^n\frac{| y_i - \hat{y_i} |}{\max(\epsilon, | y_i |)}\]

Where \(y\) is a tensor of target values, and \(\hat{y}\) is a tensor of predictions.
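
The formula maps directly onto elementwise tensor operations. Below is a minimal sketch of that computation; the clamp value eps is an illustrative assumption, not necessarily the constant the library uses internally:

>>> import torch
>>> target = torch.tensor([1.0, 10.0, 1e6])
>>> preds = torch.tensor([0.9, 15.0, 1.2e6])
>>> eps = 1.17e-6  # illustrative clamp value (assumption), keeps the denominator away from zero
>>> (torch.abs(target - preds) / torch.clamp(torch.abs(target), min=eps)).mean()
tensor(0.2667)

The clamp mirrors the \(\max(\epsilon, | y_i |)\) term in the formula, which is what keeps the result finite when a target value is exactly zero.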

As input to forward and update, the metric accepts the following input:

  • preds (Tensor): Predictions from model

  • target (Tensor): Ground truth values

As output of forward and compute, the metric returns the following output (a short accumulation sketch follows this list):

  • mean_abs_percentage_error (Tensor): A tensor with the mean absolute percentage error computed over the accumulated state
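
As a rough sketch of how these calls work together: update accumulates state over batches, compute reduces whatever has been accumulated, and reset clears the state. The data here is random and kept away from zero, so no exact output is shown.

>>> import torch
>>> from torchmetrics.regression import MeanAbsolutePercentageError
>>> metric = MeanAbsolutePercentageError()
>>> for _ in range(4):
...     # accumulate sufficient statistics batch by batch; targets kept away from zero
...     metric.update(torch.rand(8) + 0.5, torch.rand(8) + 0.5)
>>> epoch_mape = metric.compute()  # MAPE over everything accumulated so far
>>> metric.reset()                 # clear the state before the next accumulation round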

Parameters:

kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Note

MAPE output is a non-negative floating point number, and the best possible value is 0.0. Note, however, that bad predictions can lead to arbitrarily large values, especially when some target values are close to 0. This MAPE implementation returns a very large number instead of inf.
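
A brief sketch of that behaviour (illustrative, separate from the example below): with a target entry of exactly zero, the corresponding error term is divided by the epsilon clamp, so the result is huge but still finite. Only finiteness is checked here, since the exact magnitude depends on the internal epsilon.

>>> from torch import tensor
>>> from torchmetrics.regression import MeanAbsolutePercentageError
>>> metric = MeanAbsolutePercentageError()
>>> inflated = metric(tensor([0.5, 2.0]), tensor([0.0, 2.0]))  # first target is exactly zero
>>> inflated.isfinite().item()  # very large, but not inf
True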

Example

>>> from torch import tensor
>>> from torchmetrics.regression import MeanAbsolutePercentageError
>>> target = tensor([1, 10, 1e6])
>>> preds = tensor([0.9, 15, 1.2e6])
>>> mean_abs_percentage_error = MeanAbsolutePercentageError()
>>> mean_abs_percentage_error(preds, target)
tensor(0.2667)
plot(val=None, ax=None)[source]

Plot a single or multiple values from the metric.

Parameters:
  • val (Union[Tensor, Sequence[Tensor], None]) – Either a single result from calling metric.forward or metric.compute, or a list of these results. If no value is provided, metric.compute will be called automatically and its result plotted.

  • ax (Optional[Axes]) – A matplotlib axes object. If provided, the plot will be added to that axis.

Return type:

Tuple[Figure, Union[Axes, ndarray]]

Returns:

Figure and Axes object

Raises:

ModuleNotFoundError – If matplotlib is not installed

>>> from torch import randn
>>> # Example plotting a single value
>>> from torchmetrics.regression import MeanAbsolutePercentageError
>>> metric = MeanAbsolutePercentageError()
>>> metric.update(randn(10,), randn(10,))
>>> fig_, ax_ = metric.plot()
>>> from torch import randn
>>> # Example plotting multiple values
>>> from torchmetrics.regression import MeanAbsolutePercentageError
>>> metric = MeanAbsolutePercentageError()
>>> values = []
>>> for _ in range(10):
...     values.append(metric(randn(10,), randn(10,)))
>>> fig, ax = metric.plot(values)

Functional Interface

torchmetrics.functional.mean_absolute_percentage_error(preds, target)[source]

Compute mean absolute percentage error.

Parameters:
  • preds (Tensor) – estimated labels

  • target (Tensor) – ground truth labels

Return type:

Tensor

Returns:

Tensor with MAPE

Note

The epsilon value is taken from scikit-learn’s implementation of MAPE.

Example

>>> import torch
>>> from torchmetrics.functional.regression import mean_absolute_percentage_error
>>> target = torch.tensor([1, 10, 1e6])
>>> preds = torch.tensor([0.9, 15, 1.2e6])
>>> mean_absolute_percentage_error(preds, target)
tensor(0.2667)
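
Since the note above ties the epsilon to scikit-learn's implementation, a quick cross-check against sklearn.metrics.mean_absolute_percentage_error is a reasonable sanity test. This is a sketch that assumes scikit-learn is installed; note that scikit-learn expects (y_true, y_pred) while the functional here takes (preds, target).

>>> import torch
>>> from torchmetrics.functional.regression import mean_absolute_percentage_error
>>> from sklearn.metrics import mean_absolute_percentage_error as sk_mape
>>> target = torch.tensor([1.0, 10.0, 1e6])
>>> preds = torch.tensor([0.9, 15.0, 1.2e6])
>>> tm_val = mean_absolute_percentage_error(preds, target)   # (preds, target) order
>>> sk_val = sk_mape(target.numpy(), preds.numpy())          # (y_true, y_pred) order
>>> abs(tm_val.item() - float(sk_val)) < 1e-4
True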