Calibration Error

Module Interface

class torchmetrics.CalibrationError(task: Optional[Literal['binary', 'multiclass']] = None, n_bins: int = 15, norm: Literal['l1', 'l2', 'max'] = 'l1', num_classes: Optional[int] = None, ignore_index: Optional[int] = None, validate_args: bool = True, **kwargs: Any)[source]

Computes the Top-label Calibration Error. The expected calibration error can be used to quantify how well a given model is calibrated, i.e., how well the predicted output probabilities of the model match the actual probabilities of the ground truth distribution.

Three different norms are implemented, each corresponding to variations on the calibration error metric.

\text{ECE} = \sum_i^N b_i \|(p_i - c_i)\|, \text{L1 norm (Expected Calibration Error)}

\text{MCE} =  \max_{i} (p_i - c_i), \text{Infinity norm (Maximum Calibration Error)}

\text{RMSCE} = \sqrt{\sum_i^N b_i(p_i - c_i)^2}, \text{L2 norm (Root Mean Square Calibration Error)}

Where p_i is the top-1 prediction accuracy in bin i, c_i is the average confidence of predictions in bin i, and b_i is the fraction of data points in bin i. Bins are constructed uniformly in the [0,1] range.

This class is a simple wrapper that gets the task-specific version of this metric, selected by setting the task argument to either 'binary' or 'multiclass'. See the documentation of BinaryCalibrationError and MulticlassCalibrationError for the specific details of how each argument influences the metric and for examples.
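
For instance, a minimal usage sketch (reusing the data from the BinaryCalibrationError example below, so the wrapper resolves to the binary metric and the result matches):

>>> import torch
>>> from torchmetrics import CalibrationError
>>> preds = torch.tensor([0.25, 0.25, 0.55, 0.75, 0.75])
>>> target = torch.tensor([0, 0, 1, 1, 1])
>>> metric = CalibrationError(task='binary', n_bins=2, norm='l1')
>>> metric(preds, target)
tensor(0.2900)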

BinaryCalibrationError

class torchmetrics.classification.BinaryCalibrationError(n_bins=15, norm='l1', ignore_index=None, validate_args=True, **kwargs)[source]

Computes the Top-label Calibration Error for binary tasks. The expected calibration error can be used to quantify how well a given model is calibrated, i.e., how well the predicted output probabilities of the model match the actual probabilities of the ground truth distribution.

Three different norms are implemented, each corresponding to variations on the calibration error metric.

\text{ECE} = \sum_i^N b_i \|(p_i - c_i)\|, \text{L1 norm (Expected Calibration Error)}

\text{MCE} =  \max_{i} (p_i - c_i), \text{Infinity norm (Maximum Calibration Error)}

\text{RMSCE} = \sqrt{\sum_i^N b_i(p_i - c_i)^2}, \text{L2 norm (Root Mean Square Calibration Error)}

Where p_i is the top-1 prediction accuracy in bin i, c_i is the average confidence of predictions in bin i, and b_i is the fraction of data points in bin i. Bins are constructed uniformly in the [0,1] range.

Accepts the following input tensors:

  • preds (float tensor): (N, ...). preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, the input is considered to be logits and sigmoid will be applied automatically per element (see the sketch below this list).

  • target (int tensor): (N, ...). target should be a tensor containing ground truth labels, and should therefore only contain {0,1} values (except if ignore_index is specified).

Additional dimensions ... will be flattened into the batch dimension.
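
As a small sketch of the logit handling described above, reusing the probabilities from the example below: torch.logit maps several of them outside the [0,1] range, so the whole tensor is treated as logits and sigmoid is applied automatically; since sigmoid(logit(p)) = p, the result is unchanged.

>>> import torch
>>> from torchmetrics.classification import BinaryCalibrationError
>>> probs = torch.tensor([0.25, 0.25, 0.55, 0.75, 0.75])
>>> target = torch.tensor([0, 0, 1, 1, 1])
>>> metric = BinaryCalibrationError(n_bins=2, norm='l1')
>>> metric(torch.logit(probs), target)  # values outside [0,1] are treated as logits
tensor(0.2900)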

Parameters
  • n_bins (int) – Number of bins to use when computing the metric.

  • norm (Literal[‘l1’, ‘l2’, ‘max’]) – Norm used to compare empirical and expected probability bins.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation.

  • validate_args (bool) – Bool indicating whether input arguments and tensors should be validated for correctness. Set to False for faster computations.

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Example

>>> import torch
>>> from torchmetrics.classification import BinaryCalibrationError
>>> preds = torch.tensor([0.25, 0.25, 0.55, 0.75, 0.75])
>>> target = torch.tensor([0, 0, 1, 1, 1])
>>> metric = BinaryCalibrationError(n_bins=2, norm='l1')
>>> metric(preds, target)
tensor(0.2900)
>>> metric = BinaryCalibrationError(n_bins=2, norm='l2')
>>> metric(preds, target)
tensor(0.2918)
>>> metric = BinaryCalibrationError(n_bins=2, norm='max')
>>> metric(preds, target)
tensor(0.3167)
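
As a rough sketch of where the l1 value above comes from (a hand computation assuming, as the outputs suggest, that in the binary case the per-sample confidences are the predicted probabilities themselves and the per-sample accuracies are the targets, binned uniformly over [0,1]):

>>> import torch
>>> preds = torch.tensor([0.25, 0.25, 0.55, 0.75, 0.75])
>>> target = torch.tensor([0, 0, 1, 1, 1]).float()
>>> bins = torch.bucketize(preds, torch.tensor([0.5]))  # 2 uniform bins over [0,1]
>>> ece = torch.tensor(0.0)
>>> for b in range(2):                                   # sum_i b_i * |p_i - c_i|
...     mask = bins == b
...     if mask.any():
...         ece += mask.float().mean() * (target[mask].mean() - preds[mask].mean()).abs()
>>> round(ece.item(), 4)
0.29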

compute()[source]

Computes the final metric value from the state variables synchronized across the distributed backend.

Return type

Tensor

update(preds, target)[source]

Updates the metric state variables with a new batch of preds and target.

Return type

None

MulticlassCalibrationError

class torchmetrics.classification.MulticlassCalibrationError(num_classes, n_bins=15, norm='l1', ignore_index=None, validate_args=True, **kwargs)[source]

Computes the Top-label Calibration Error for multiclass tasks. The expected calibration error can be used to quantify how well a given model is calibrated, i.e., how well the predicted output probabilities of the model match the actual probabilities of the ground truth distribution.

Three different norms are implemented, each corresponding to variations on the calibration error metric.

\text{ECE} = \sum_i^N b_i \|(p_i - c_i)\|, \text{L1 norm (Expected Calibration Error)}

\text{MCE} =  \max_{i} (p_i - c_i), \text{Infinity norm (Maximum Calibration Error)}

\text{RMSCE} = \sqrt{\sum_i^N b_i(p_i - c_i)^2}, \text{L2 norm (Root Mean Square Calibration Error)}

Where p_i is the top-1 prediction accuracy in bin i, c_i is the average confidence of predictions in bin i, and b_i is the fraction of data points in bin i. Bins are constructed uniformly in the [0,1] range.

Accepts the following input tensors:

  • preds (float tensor): (N, C, ...). preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, the input is considered to be logits and softmax will be applied automatically per sample (see the sketch below this list).

  • target (int tensor): (N, ...). target should be a tensor containing ground truth labels, and should therefore only contain values in the [0, n_classes-1] range (except if ignore_index is specified).

Additional dimensions ... will be flattened into the batch dimension.
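
As a small sketch of the logit handling described above, reusing the probabilities from the example below: taking their log puts the values outside the [0,1] range, so softmax is applied automatically per sample, and since each row already sums to one, softmax(log p) recovers p and the result is unchanged.

>>> import torch
>>> from torchmetrics.classification import MulticlassCalibrationError
>>> probs = torch.tensor([[0.25, 0.20, 0.55],
...                       [0.55, 0.05, 0.40],
...                       [0.10, 0.30, 0.60],
...                       [0.90, 0.05, 0.05]])
>>> target = torch.tensor([0, 1, 2, 0])
>>> metric = MulticlassCalibrationError(num_classes=3, n_bins=3, norm='l1')
>>> metric(probs.log(), target)  # log-probabilities lie outside [0,1], treated as logits
tensor(0.2000)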

Parameters
  • num_classes (int) – Integer specifying the number of classes.

  • n_bins (int) – Number of bins to use when computing the metric.

  • norm (Literal[‘l1’, ‘l2’, ‘max’]) – Norm used to compare empirical and expected probability bins.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation.

  • validate_args (bool) – Bool indicating whether input arguments and tensors should be validated for correctness. Set to False for faster computations.

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Example

>>> import torch
>>> from torchmetrics.classification import MulticlassCalibrationError
>>> preds = torch.tensor([[0.25, 0.20, 0.55],
...                       [0.55, 0.05, 0.40],
...                       [0.10, 0.30, 0.60],
...                       [0.90, 0.05, 0.05]])
>>> target = torch.tensor([0, 1, 2, 0])
>>> metric = MulticlassCalibrationError(num_classes=3, n_bins=3, norm='l1')
>>> metric(preds, target)
tensor(0.2000)
>>> metric = MulticlassCalibrationError(num_classes=3, n_bins=3, norm='l2')
>>> metric(preds, target)
tensor(0.2082)
>>> metric = MulticlassCalibrationError(num_classes=3, n_bins=3, norm='max')
>>> metric(preds, target)
tensor(0.2333)
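
As a rough sketch of where the l1 value above comes from (a hand computation of top-1 confidences and accuracies with 3 uniform bins, mirroring the formula):

>>> import torch
>>> preds = torch.tensor([[0.25, 0.20, 0.55],
...                       [0.55, 0.05, 0.40],
...                       [0.10, 0.30, 0.60],
...                       [0.90, 0.05, 0.05]])
>>> target = torch.tensor([0, 1, 2, 0])
>>> conf, pred_cls = preds.max(dim=1)               # top-1 confidence and predicted class
>>> acc = (pred_cls == target).float()              # top-1 accuracy per sample
>>> bins = torch.bucketize(conf, torch.tensor([1 / 3, 2 / 3]))  # 3 uniform bins over [0,1]
>>> ece = torch.tensor(0.0)
>>> for b in range(3):                              # sum_i b_i * |p_i - c_i|
...     mask = bins == b
...     if mask.any():
...         ece += mask.float().mean() * (acc[mask].mean() - conf[mask].mean()).abs()
>>> round(ece.item(), 4)
0.2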

compute()[source]

Computes the final metric value from the state variables synchronized across the distributed backend.

Return type

Tensor

update(preds, target)[source]

Updates the metric state variables with a new batch of preds and target.

Return type

None

Functional Interface

torchmetrics.functional.calibration_error(preds, target, task=None, n_bins=15, norm='l1', num_classes=None, ignore_index=None, validate_args=True)[source]

Computes the Top-label Calibration Error. The expected calibration error can be used to quantify how well a given model is calibrated, i.e., how well the predicted output probabilities of the model match the actual probabilities of the ground truth distribution.

Three different norms are implemented, each corresponding to variations on the calibration error metric.

\text{ECE} = \sum_i^N b_i \|(p_i - c_i)\|, \text{L1 norm (Expected Calibration Error)}

\text{MCE} =  \max_{i} (p_i - c_i), \text{Infinity norm (Maximum Calibration Error)}

\text{RMSCE} = \sqrt{\sum_i^N b_i(p_i - c_i)^2}, \text{L2 norm (Root Mean Square Calibration Error)}

Where p_i is the top-1 prediction accuracy in bin i, c_i is the average confidence of predictions in bin i, and b_i is the fraction of data points in bin i. Bins are constructed uniformly in the [0,1] range.

This function is a simple wrapper that gets the task-specific version of this metric, selected by setting the task argument to either 'binary' or 'multiclass'. See the documentation of binary_calibration_error() and multiclass_calibration_error() for the specific details of how each argument influences the metric and for examples.
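
For instance, a minimal usage sketch (reusing the data from the binary_calibration_error() example below, so the wrapper resolves to the binary version and the result matches):

>>> import torch
>>> from torchmetrics.functional import calibration_error
>>> preds = torch.tensor([0.25, 0.25, 0.55, 0.75, 0.75])
>>> target = torch.tensor([0, 0, 1, 1, 1])
>>> calibration_error(preds, target, task='binary', n_bins=2, norm='l1')
tensor(0.2900)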

Return type

Tensor

binary_calibration_error

torchmetrics.functional.classification.binary_calibration_error(preds, target, n_bins=15, norm='l1', ignore_index=None, validate_args=True)[source]

Computes the Top-label Calibration Error for binary tasks. The expected calibration error can be used to quantify how well a given model is calibrated, i.e., how well the predicted output probabilities of the model match the actual probabilities of the ground truth distribution.

Three different norms are implemented, each corresponding to variations on the calibration error metric.

\text{ECE} = \sum_i^N b_i \|(p_i - c_i)\|, \text{L1 norm (Expected Calibration Error)}

\text{MCE} =  \max_{i} (p_i - c_i), \text{Infinity norm (Maximum Calibration Error)}

\text{RMSCE} = \sqrt{\sum_i^N b_i(p_i - c_i)^2}, \text{L2 norm (Root Mean Square Calibration Error)}

Where p_i is the top-1 prediction accuracy in bin i, c_i is the average confidence of predictions in bin i, and b_i is the fraction of data points in bin i. Bins are constructed uniformly in the [0,1] range.

Accepts the following input tensors:

  • preds (float tensor): (N, ...). preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, the input is considered to be logits and sigmoid will be applied automatically per element.

  • target (int tensor): (N, ...). target should be a tensor containing ground truth labels, and should therefore only contain {0,1} values (except if ignore_index is specified).

Additional dimensions ... will be flattened into the batch dimension.
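
A small sanity-check sketch of the flattening described above (the random data is purely illustrative; the comparison holds because flattening only changes how the same samples are indexed):

>>> import torch
>>> from torchmetrics.functional.classification import binary_calibration_error
>>> preds = torch.rand(2, 3)           # probabilities with one extra dimension
>>> target = torch.randint(2, (2, 3))
>>> flat = binary_calibration_error(preds.flatten(), target.flatten())
>>> bool(binary_calibration_error(preds, target) == flat)
True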

Parameters
  • preds (Tensor) – Tensor with predictions

  • target (Tensor) – Tensor with true labels

  • n_bins (int) – Number of bins to use when computing the metric.

  • norm (Literal[‘l1’, ‘l2’, ‘max’]) – Norm used to compare empirical and expected probability bins.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation.

  • validate_args (bool) – Bool indicating whether input arguments and tensors should be validated for correctness. Set to False for faster computations.

Example

>>> import torch
>>> from torchmetrics.functional.classification import binary_calibration_error
>>> preds = torch.tensor([0.25, 0.25, 0.55, 0.75, 0.75])
>>> target = torch.tensor([0, 0, 1, 1, 1])
>>> binary_calibration_error(preds, target, n_bins=2, norm='l1')
tensor(0.2900)
>>> binary_calibration_error(preds, target, n_bins=2, norm='l2')
tensor(0.2918)
>>> binary_calibration_error(preds, target, n_bins=2, norm='max')
tensor(0.3167)

Return type

Tensor

multiclass_calibration_error

torchmetrics.functional.classification.multiclass_calibration_error(preds, target, num_classes, n_bins=15, norm='l1', ignore_index=None, validate_args=True)[source]

Computes the Top-label Calibration Error for multiclass tasks. The expected calibration error can be used to quantify how well a given model is calibrated, i.e., how well the predicted output probabilities of the model match the actual probabilities of the ground truth distribution.

Three different norms are implemented, each corresponding to variations on the calibration error metric.

\text{ECE} = \sum_i^N b_i \|(p_i - c_i)\|, \text{L1 norm (Expected Calibration Error)}

\text{MCE} =  \max_{i} (p_i - c_i), \text{Infinity norm (Maximum Calibration Error)}

\text{RMSCE} = \sqrt{\sum_i^N b_i(p_i - c_i)^2}, \text{L2 norm (Root Mean Square Calibration Error)}

Where p_i is the top-1 prediction accuracy in bin i, c_i is the average confidence of predictions in bin i, and b_i is the fraction of data points in bin i. Bins are constructed uniformly in the [0,1] range.

Accepts the following input tensors:

  • preds (float tensor): (N, C, ...). preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, the input is considered to be logits and softmax will be applied automatically per sample.

  • target (int tensor): (N, ...). target should be a tensor containing ground truth labels, and should therefore only contain values in the [0, n_classes-1] range (except if ignore_index is specified).

Additional dimensions ... will be flattened into the batch dimension.
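
A small sanity-check sketch of the flattening described above, assuming the extra dimension is simply merged into the batch dimension (the random data is purely illustrative):

>>> import torch
>>> from torchmetrics.functional.classification import multiclass_calibration_error
>>> preds = torch.randn(2, 3, 4).softmax(dim=1)       # (N, C, ...) with one extra dimension
>>> target = torch.randint(3, (2, 4))
>>> flat_preds = preds.movedim(1, -1).reshape(-1, 3)  # move classes last, merge N with the extra dim
>>> flat_target = target.flatten()
>>> a = multiclass_calibration_error(preds, target, num_classes=3)
>>> b = multiclass_calibration_error(flat_preds, flat_target, num_classes=3)
>>> bool(torch.isclose(a, b))
True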

Parameters
  • preds (Tensor) – Tensor with predictions

  • target (Tensor) – Tensor with true labels

  • num_classes (int) – Integer specifying the number of classes.

  • n_bins (int) – Number of bins to use when computing the metric.

  • norm (Literal[‘l1’, ‘l2’, ‘max’]) – Norm used to compare empirical and expected probability bins.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation.

  • validate_args (bool) – Bool indicating whether input arguments and tensors should be validated for correctness. Set to False for faster computations.

Example

>>> import torch
>>> from torchmetrics.functional.classification import multiclass_calibration_error
>>> preds = torch.tensor([[0.25, 0.20, 0.55],
...                       [0.55, 0.05, 0.40],
...                       [0.10, 0.30, 0.60],
...                       [0.90, 0.05, 0.05]])
>>> target = torch.tensor([0, 1, 2, 0])
>>> multiclass_calibration_error(preds, target, num_classes=3, n_bins=3, norm='l1')
tensor(0.2000)
>>> multiclass_calibration_error(preds, target, num_classes=3, n_bins=3, norm='l2')
tensor(0.2082)
>>> multiclass_calibration_error(preds, target, num_classes=3, n_bins=3, norm='max')
tensor(0.2333)

Return type

Tensor