Average Precision

Module Interface

class torchmetrics.AveragePrecision(task: Literal['binary', 'multiclass', 'multilabel'], thresholds: Optional[Union[int, List[float], torch.Tensor]] = None, num_classes: Optional[int] = None, num_labels: Optional[int] = None, average: Optional[Literal['macro', 'weighted', 'none']] = 'macro', ignore_index: Optional[int] = None, validate_args: bool = True, **kwargs: Any)[source]

Computes the average precision (AP) score. The AP score summarizes a precision-recall curve as a weighted mean of precisions at each threshold, with the difference in recall from the previous threshold used as the weight:

AP = \sum_{n} (R_n - R_{n-1}) P_n

where P_n and R_n are the precision and recall at threshold index n, respectively. This value is equivalent to the area under the precision-recall curve (AUPRC).
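
To make the weighted sum concrete, here is a minimal sketch that evaluates the formula for a small, hypothetical precision-recall curve (the (R_n, P_n) pairs are illustrative values, not torchmetrics output):

>>> pairs = [(0.5, 1.0), (0.75, 0.8), (1.0, 0.6)]  # hypothetical (R_n, P_n)
>>> ap, prev_r = 0.0, 0.0
>>> for r, p in pairs:
...     ap += (r - prev_r) * p
...     prev_r = r
>>> round(ap, 4)
0.85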

This class is a simple wrapper that dispatches to the task-specific versions of this metric, selected by setting the task argument to 'binary', 'multiclass' or 'multilabel'. See the documentation of BinaryAveragePrecision, MulticlassAveragePrecision and MultilabelAveragePrecision for details on how each argument influences the computation and for examples.

Legacy Example:
>>> import torch
>>> from torchmetrics import AveragePrecision
>>> pred = torch.tensor([0, 0.1, 0.8, 0.4])
>>> target = torch.tensor([0, 1, 1, 1])
>>> average_precision = AveragePrecision(task="binary")
>>> average_precision(pred, target)
tensor(1.)
>>> pred = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
...                      [0.05, 0.75, 0.05, 0.05, 0.05],
...                      [0.05, 0.05, 0.75, 0.05, 0.05],
...                      [0.05, 0.05, 0.05, 0.75, 0.05]])
>>> target = torch.tensor([0, 1, 3, 2])
>>> average_precision = AveragePrecision(task="multiclass", num_classes=5, average=None)
>>> average_precision(pred, target)
tensor([1.0000, 1.0000, 0.2500, 0.2500,    nan])

BinaryAveragePrecision

class torchmetrics.classification.BinaryAveragePrecision(thresholds=None, ignore_index=None, validate_args=True, **kwargs)[source]

Computes the average precision (AP) score for binary tasks. The AP score summarizes a precision-recall curve as a weighted mean of precisions at each threshold, with the difference in recall from the previous threshold used as the weight:

AP = \sum_{n} (R_n - R_{n-1}) P_n

where P_n and R_n are the precision and recall at threshold index n, respectively. This value is equivalent to the area under the precision-recall curve (AUPRC).

As input to forward and update, the metric accepts the following input:

  • preds (Tensor): A float tensor of shape (N, ...) containing probabilities or logits for each observation. If preds has values outside the [0,1] range, the input is considered to be logits and a sigmoid is automatically applied per element (illustrated in the sketch below).

  • target (Tensor): An int tensor of shape (N, ...) containing ground-truth labels, which should therefore only contain {0,1} values (except where ignore_index is specified). The value 1 always encodes the positive class.

As output to forward and compute, the metric returns the following output:

  • bap (Tensor): A single scalar with the average precision score.

Additional dimensions ... will be flattened into the batch dimension.

The implementation supports calculating the metric both in a non-binned, accurate version and in a binned version that is less accurate but more memory-efficient. Setting the thresholds argument to None activates the non-binned version, which uses memory of size \mathcal{O}(n_{samples}), whereas setting thresholds to an integer, a list or a 1d tensor activates the binned version, which uses memory of size \mathcal{O}(n_{thresholds}) (constant memory).
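
The logits handling mentioned above can be sketched as follows (hypothetical values; the metric is expected to detect out-of-range inputs and apply the sigmoid itself):

>>> import torch
>>> from torchmetrics.classification import BinaryAveragePrecision
>>> logits = torch.tensor([-2.0, 0.3, 1.5, -0.6])  # values outside [0,1] -> logits
>>> target = torch.tensor([0, 1, 1, 0])
>>> metric = BinaryAveragePrecision(thresholds=None)
>>> torch.allclose(metric(logits, target), metric(torch.sigmoid(logits), target))
True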

Parameters
  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. This is the most accurate but also the most memory-consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation.

  • validate_args (bool) – Bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Example

>>> import torch
>>> from torchmetrics.classification import BinaryAveragePrecision
>>> preds = torch.tensor([0, 0.5, 0.7, 0.8])
>>> target = torch.tensor([0, 1, 1, 0])
>>> metric = BinaryAveragePrecision(thresholds=None)
>>> metric(preds, target)
tensor(0.5833)
>>> bap = BinaryAveragePrecision(thresholds=5)
>>> bap(preds, target)
tensor(0.6667)

MulticlassAveragePrecision

class torchmetrics.classification.MulticlassAveragePrecision(num_classes, average='macro', thresholds=None, ignore_index=None, validate_args=True, **kwargs)[source]

Computes the average precision (AP) score for multiclass tasks. The AP score summarizes a precision-recall curve as a weighted mean of precisions at each threshold, with the difference in recall from the previous threshold used as the weight:

AP = \sum_{n} (R_n - R_{n-1}) P_n

where P_n and R_n are the precision and recall at threshold index n, respectively. This value is equivalent to the area under the precision-recall curve (AUPRC).

As input to forward and update, the metric accepts the following input:

  • preds (Tensor): A float tensor of shape (N, C, ...) containing probabilities or logits for each observation. If preds has values outside the [0,1] range, the input is considered to be logits and a softmax is automatically applied per sample.

  • target (Tensor): An int tensor of shape (N, ...) containing ground-truth labels, which should therefore only contain values in the [0, n_classes-1] range (except where ignore_index is specified).

As output to forward and compute, the metric returns the following output:

  • mcap (Tensor): If average=None or "none", a 1d tensor of shape (n_classes,) will be returned, with the AP score per class. If average="macro" or "weighted", a single scalar is returned.

Additional dimensions ... will be flattened into the batch dimension.

The implementation supports calculating the metric both in a non-binned, accurate version and in a binned version that is less accurate but more memory-efficient. Setting the thresholds argument to None activates the non-binned version, which uses memory of size \mathcal{O}(n_{samples}), whereas setting thresholds to an integer, a list or a 1d tensor activates the binned version, which uses memory of size \mathcal{O}(n_{thresholds} \times n_{classes}) (constant memory).
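
The difference between the macro and weighted reductions described under the average parameter below can be sketched from hypothetical per-class AP scores and class supports (illustrative numbers only):

>>> import torch
>>> ap_per_class = torch.tensor([1.00, 0.50, 0.25])  # hypothetical per-class AP
>>> support = torch.tensor([10., 30., 60.])           # true instances per class
>>> ap_per_class.mean()                               # macro: plain average
tensor(0.5833)
>>> (ap_per_class * support / support.sum()).sum()    # weighted: support-weighted
tensor(0.4000)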

Parameters
  • num_classes (int) – Integer specifying the number of classes

  • average (Optional[Literal['macro', 'weighted', 'none']]) –

    Defines the reduction that is applied over classes. Should be one of the following:

    • macro: Calculates the score for each class and averages them.

    • weighted: Calculates the score for each class and computes a weighted average using their support.

    • "none" or None: Calculates the score for each class and applies no reduction.

  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. This is the most accurate but also the most memory-consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation.

  • validate_args (bool) – Bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Example

>>> import torch
>>> from torchmetrics.classification import MulticlassAveragePrecision
>>> preds = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
...                       [0.05, 0.75, 0.05, 0.05, 0.05],
...                       [0.05, 0.05, 0.75, 0.05, 0.05],
...                       [0.05, 0.05, 0.05, 0.75, 0.05]])
>>> target = torch.tensor([0, 1, 3, 2])
>>> metric = MulticlassAveragePrecision(num_classes=5, average="macro", thresholds=None)
>>> metric(preds, target)
tensor(0.6250)
>>> mcap = MulticlassAveragePrecision(num_classes=5, average=None, thresholds=None)
>>> mcap(preds, target)
tensor([1.0000, 1.0000, 0.2500, 0.2500,    nan])
>>> mcap = MulticlassAveragePrecision(num_classes=5, average="macro", thresholds=5)
>>> mcap(preds, target)
tensor(0.5000)
>>> mcap = MulticlassAveragePrecision(num_classes=5, average=None, thresholds=5)
>>> mcap(preds, target)
tensor([1.0000, 1.0000, 0.2500, 0.2500, -0.0000])

MultilabelAveragePrecision

class torchmetrics.classification.MultilabelAveragePrecision(num_labels, average='macro', thresholds=None, ignore_index=None, validate_args=True, **kwargs)[source]

Computes the average precision (AP) score for multilabel tasks. The AP score summarizes a precision-recall curve as a weighted mean of precisions at each threshold, with the difference in recall from the previous threshold used as the weight:

AP = \sum_{n} (R_n - R_{n-1}) P_n

where P_n and R_n are the precision and recall at threshold index n, respectively. This value is equivalent to the area under the precision-recall curve (AUPRC).

As input to forward and update, the metric accepts the following input:

  • preds (Tensor): A float tensor of shape (N, C, ...) containing probabilities or logits for each observation. If preds has values outside the [0,1] range, the input is considered to be logits and a sigmoid is automatically applied per element.

  • target (Tensor): An int tensor of shape (N, C, ...) containing ground-truth labels, which should therefore only contain {0,1} values (except where ignore_index is specified).

As output to forward and compute, the metric returns the following output:

  • mlap (Tensor): If average=None or "none", a 1d tensor of shape (n_labels,) will be returned, with the AP score per label. If average="micro", "macro" or "weighted", a single scalar is returned.

Additional dimensions ... will be flattened into the batch dimension.

The implementation supports calculating the metric both in a non-binned, accurate version and in a binned version that is less accurate but more memory-efficient. Setting the thresholds argument to None activates the non-binned version, which uses memory of size \mathcal{O}(n_{samples}), whereas setting thresholds to an integer, a list or a 1d tensor activates the binned version, which uses memory of size \mathcal{O}(n_{thresholds} \times n_{labels}) (constant memory).
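
For the micro reduction listed below, the labels are assumed to be flattened into a single binary problem; a minimal sketch of that assumed equivalence, using the functional interface:

>>> import torch
>>> from torchmetrics.functional.classification import (
...     binary_average_precision, multilabel_average_precision)
>>> preds = torch.tensor([[0.75, 0.05, 0.35],
...                       [0.45, 0.75, 0.05]])
>>> target = torch.tensor([[1, 0, 1],
...                        [0, 1, 0]])
>>> micro = multilabel_average_precision(preds, target, num_labels=3, average="micro")
>>> torch.allclose(micro, binary_average_precision(preds.flatten(), target.flatten()))
True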

Parameters
  • num_labels (int) – Integer specifying the number of labels

  • average (Optional[Literal['micro', 'macro', 'weighted', 'none']]) –

    Defines the reduction that is applied over labels. Should be one of the following:

    • micro: Sums the score over all labels.

    • macro: Calculates the score for each label and averages them.

    • weighted: Calculates the score for each label and computes a weighted average using their support.

    • "none" or None: Calculates the score for each label and applies no reduction.

  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. This is the most accurate but also the most memory-consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation.

  • validate_args (bool) – Bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Example

>>> import torch
>>> from torchmetrics.classification import MultilabelAveragePrecision
>>> preds = torch.tensor([[0.75, 0.05, 0.35],
...                       [0.45, 0.75, 0.05],
...                       [0.05, 0.55, 0.75],
...                       [0.05, 0.65, 0.05]])
>>> target = torch.tensor([[1, 0, 1],
...                        [0, 0, 0],
...                        [0, 1, 1],
...                        [1, 1, 1]])
>>> metric = MultilabelAveragePrecision(num_labels=3, average="macro", thresholds=None)
>>> metric(preds, target)
tensor(0.7500)
>>> mlap = MultilabelAveragePrecision(num_labels=3, average=None, thresholds=None)
>>> mlap(preds, target)
tensor([0.7500, 0.5833, 0.9167])
>>> mlap = MultilabelAveragePrecision(num_labels=3, average="macro", thresholds=5)
>>> mlap(preds, target)
tensor(0.7778)
>>> mlap = MultilabelAveragePrecision(num_labels=3, average=None, thresholds=5)
>>> mlap(preds, target)
tensor([0.7500, 0.6667, 0.9167])

Functional Interface

torchmetrics.functional.average_precision(preds, target, task, thresholds=None, num_classes=None, num_labels=None, average='macro', ignore_index=None, validate_args=True)[source]

Computes the average precision (AP) score. The AP score summarizes a precision-recall curve as a weighted mean of precisions at each threshold, with the difference in recall from the previous threshold used as the weight:

AP = \sum_{n} (R_n - R_{n-1}) P_n

where P_n and R_n are the precision and recall at threshold index n, respectively. This value is equivalent to the area under the precision-recall curve (AUPRC).

This function is a simple wrapper that dispatches to the task-specific versions of this metric, selected by setting the task argument to 'binary', 'multiclass' or 'multilabel'. See the documentation of binary_average_precision(), multiclass_average_precision() and multilabel_average_precision() for details on how each argument influences the computation and for examples.
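
Concretely, the task argument selects the matching task-specific function, so for hypothetical binary inputs the two calls below are expected to agree:

>>> import torch
>>> from torchmetrics.functional import average_precision
>>> from torchmetrics.functional.classification import binary_average_precision
>>> preds = torch.tensor([0.1, 0.6, 0.8, 0.3])
>>> target = torch.tensor([0, 1, 1, 0])
>>> torch.allclose(average_precision(preds, target, task="binary"),
...                binary_average_precision(preds, target))
True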

Legacy Example:
>>> import torch
>>> from torchmetrics.functional import average_precision
>>> pred = torch.tensor([0.0, 1.0, 2.0, 3.0])
>>> target = torch.tensor([0, 1, 1, 1])
>>> average_precision(pred, target, task="binary")
tensor(1.)
>>> pred = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
...                      [0.05, 0.75, 0.05, 0.05, 0.05],
...                      [0.05, 0.05, 0.75, 0.05, 0.05],
...                      [0.05, 0.05, 0.05, 0.75, 0.05]])
>>> target = torch.tensor([0, 1, 3, 2])
>>> average_precision(pred, target, task="multiclass", num_classes=5, average=None)
tensor([1.0000, 1.0000, 0.2500, 0.2500,    nan])
Return type

Union[List[Tensor], Tensor]

binary_average_precision

torchmetrics.functional.classification.binary_average_precision(preds, target, thresholds=None, ignore_index=None, validate_args=True)[source]

Computes the average precision (AP) score for binary tasks. The AP score summarizes a precision-recall curve as a weighted mean of precisions at each threshold, with the difference in recall from the previous threshold used as the weight:

AP = \sum_{n} (R_n - R_{n-1}) P_n

where P_n and R_n are the precision and recall at threshold index n, respectively. This value is equivalent to the area under the precision-recall curve (AUPRC).

Accepts the following input tensors:

  • preds (float tensor): (N, ...). preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, the input is considered to be logits and a sigmoid is automatically applied per element.

  • target (int tensor): (N, ...). target should be a tensor containing ground-truth labels, and should therefore only contain {0,1} values (except where ignore_index is specified). The value 1 always encodes the positive class.

Additional dimensions ... will be flattened into the batch dimension.

The implementation supports calculating the metric both in a non-binned, accurate version and in a binned version that is less accurate but more memory-efficient. Setting the thresholds argument to None activates the non-binned version, which uses memory of size \mathcal{O}(n_{samples}), whereas setting thresholds to an integer, a list or a 1d tensor activates the binned version, which uses memory of size \mathcal{O}(n_{thresholds}) (constant memory).
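
As a sketch of this accuracy/memory trade-off (random, illustrative data; exact deviations will vary), the binned estimate is expected to approach the non-binned value as the threshold count grows:

>>> import torch
>>> from torchmetrics.functional.classification import binary_average_precision
>>> _ = torch.manual_seed(0)
>>> preds = torch.rand(1000)
>>> target = (torch.rand(1000) < preds).long()  # targets correlated with preds
>>> exact = binary_average_precision(preds, target, thresholds=None)
>>> approx = [binary_average_precision(preds, target, thresholds=t) for t in (5, 50, 500)]
>>> # the deviation |approx - exact| typically shrinks as the threshold count grows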

Parameters
  • preds (Tensor) – Tensor with predictions

  • target (Tensor) – Tensor with true labels

  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. This is the most accurate but also the most memory-consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation.

  • validate_args (bool) – Bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

Return type

Tensor

Returns

A single scalar with the average precision score

Example

>>> import torch
>>> from torchmetrics.functional.classification import binary_average_precision
>>> preds = torch.tensor([0, 0.5, 0.7, 0.8])
>>> target = torch.tensor([0, 1, 1, 0])
>>> binary_average_precision(preds, target, thresholds=None)
tensor(0.5833)
>>> binary_average_precision(preds, target, thresholds=5)
tensor(0.6667)

multiclass_average_precision

torchmetrics.functional.classification.multiclass_average_precision(preds, target, num_classes, average='macro', thresholds=None, ignore_index=None, validate_args=True)[source]

Computes the average precision (AP) score for multiclass tasks. The AP score summarizes a precision-recall curve as a weighted mean of precisions at each threshold, with the difference in recall from the previous threshold used as the weight:

AP = \sum_{n} (R_n - R_{n-1}) P_n

where P_n and R_n are the precision and recall at threshold index n, respectively. This value is equivalent to the area under the precision-recall curve (AUPRC).

Accepts the following input tensors:

  • preds (float tensor): (N, C, ...). preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, the input is considered to be logits and a softmax is automatically applied per sample.

  • target (int tensor): (N, ...). target should be a tensor containing ground-truth labels, and should therefore only contain values in the [0, n_classes-1] range (except where ignore_index is specified).

Additional dimensions ... will be flattened into the batch dimension.

The implementation supports calculating the metric both in a non-binned, accurate version and in a binned version that is less accurate but more memory-efficient. Setting the thresholds argument to None activates the non-binned version, which uses memory of size \mathcal{O}(n_{samples}), whereas setting thresholds to an integer, a list or a 1d tensor activates the binned version, which uses memory of size \mathcal{O}(n_{thresholds} \times n_{classes}) (constant memory).
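
The ignore_index argument from the signature can be used to mask out samples; a minimal sketch, where -1 is a hypothetical marker for entries to skip:

>>> import torch
>>> from torchmetrics.functional.classification import multiclass_average_precision
>>> preds = torch.tensor([[0.9, 0.05, 0.05],
...                       [0.1, 0.80, 0.10],
...                       [0.2, 0.20, 0.60],
...                       [0.3, 0.40, 0.30]])
>>> target = torch.tensor([0, 1, 2, -1])  # last sample marked to be ignored
>>> torch.allclose(
...     multiclass_average_precision(preds, target, num_classes=3, ignore_index=-1),
...     multiclass_average_precision(preds[:3], target[:3], num_classes=3))
True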

Parameters
  • preds (Tensor) – Tensor with predictions

  • target (Tensor) – Tensor with true labels

  • num_classes (int) – Integer specifying the number of classes

  • average (Optional[Literal['macro', 'weighted', 'none']]) –

    Defines the reduction that is applied over classes. Should be one of the following:

    • macro: Calculates the score for each class and averages them.

    • weighted: Calculates the score for each class and computes a weighted average using their support.

    • "none" or None: Calculates the score for each class and applies no reduction.

  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. This is the most accurate but also the most memory-consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation.

  • validate_args (bool) – Bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

Return type

Tensor

Returns

If average=None or "none", a 1d tensor of shape (n_classes,) will be returned, with the AP score per class. If average="macro" or "weighted", a single scalar is returned.

Example

>>> import torch
>>> from torchmetrics.functional.classification import multiclass_average_precision
>>> preds = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
...                       [0.05, 0.75, 0.05, 0.05, 0.05],
...                       [0.05, 0.05, 0.75, 0.05, 0.05],
...                       [0.05, 0.05, 0.05, 0.75, 0.05]])
>>> target = torch.tensor([0, 1, 3, 2])
>>> multiclass_average_precision(preds, target, num_classes=5, average="macro", thresholds=None)
tensor(0.6250)
>>> multiclass_average_precision(preds, target, num_classes=5, average=None, thresholds=None)
tensor([1.0000, 1.0000, 0.2500, 0.2500,    nan])
>>> multiclass_average_precision(preds, target, num_classes=5, average="macro", thresholds=5)
tensor(0.5000)
>>> multiclass_average_precision(preds, target, num_classes=5, average=None, thresholds=5)
tensor([1.0000, 1.0000, 0.2500, 0.2500, -0.0000])

multilabel_average_precision

torchmetrics.functional.classification.multilabel_average_precision(preds, target, num_labels, average='macro', thresholds=None, ignore_index=None, validate_args=True)[source]

Computes the average precision (AP) score for multilabel tasks. The AP score summarizes a precision-recall curve as a weighted mean of precisions at each threshold, with the difference in recall from the previous threshold used as the weight:

AP = \sum_{n} (R_n - R_{n-1}) P_n

where P_n and R_n are the precision and recall at threshold index n, respectively. This value is equivalent to the area under the precision-recall curve (AUPRC).

Accepts the following input tensors:

  • preds (float tensor): (N, C, ...). preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, the input is considered to be logits and a sigmoid is automatically applied per element.

  • target (int tensor): (N, C, ...). target should be a tensor containing ground-truth labels, and should therefore only contain {0,1} values (except where ignore_index is specified).

Additional dimensions ... will be flattened into the batch dimension.

The implementation supports calculating the metric both in a non-binned, accurate version and in a binned version that is less accurate but more memory-efficient. Setting the thresholds argument to None activates the non-binned version, which uses memory of size \mathcal{O}(n_{samples}), whereas setting thresholds to an integer, a list or a 1d tensor activates the binned version, which uses memory of size \mathcal{O}(n_{thresholds} \times n_{labels}) (constant memory).
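
A minimal sketch of the dimension flattening described above, using a hypothetical (N, C, extra) input where the trailing dimension is assumed to fold into the batch:

>>> import torch
>>> from torchmetrics.functional.classification import multilabel_average_precision
>>> _ = torch.manual_seed(0)
>>> preds = torch.rand(2, 3, 4)              # 2 samples, 3 labels, 4 positions
>>> target = torch.randint(0, 2, (2, 3, 4))
>>> flat_preds = preds.movedim(1, -1).reshape(-1, 3)    # -> (8, 3)
>>> flat_target = target.movedim(1, -1).reshape(-1, 3)
>>> torch.allclose(
...     multilabel_average_precision(preds, target, num_labels=3),
...     multilabel_average_precision(flat_preds, flat_target, num_labels=3))
True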

Parameters
  • preds (Tensor) – Tensor with predictions

  • target (Tensor) – Tensor with true labels

  • num_labels (int) – Integer specifying the number of labels

  • average (Optional[Literal['micro', 'macro', 'weighted', 'none']]) –

    Defines the reduction that is applied over labels. Should be one of the following:

    • micro: Sums the score over all labels.

    • macro: Calculates the score for each label and averages them.

    • weighted: Calculates the score for each label and computes a weighted average using their support.

    • "none" or None: Calculates the score for each label and applies no reduction.

  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. This is the most accurate but also the most memory-consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation.

  • validate_args (bool) – Bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

Return type

Tensor

Returns

If average=None or "none", a 1d tensor of shape (n_labels,) will be returned, with the AP score per label. If average="micro", "macro" or "weighted", a single scalar is returned.

Example

>>> import torch
>>> from torchmetrics.functional.classification import multilabel_average_precision
>>> preds = torch.tensor([[0.75, 0.05, 0.35],
...                       [0.45, 0.75, 0.05],
...                       [0.05, 0.55, 0.75],
...                       [0.05, 0.65, 0.05]])
>>> target = torch.tensor([[1, 0, 1],
...                        [0, 0, 0],
...                        [0, 1, 1],
...                        [1, 1, 1]])
>>> multilabel_average_precision(preds, target, num_labels=3, average="macro", thresholds=None)
tensor(0.7500)
>>> multilabel_average_precision(preds, target, num_labels=3, average=None, thresholds=None)
tensor([0.7500, 0.5833, 0.9167])
>>> multilabel_average_precision(preds, target, num_labels=3, average="macro", thresholds=5)
tensor(0.7778)
>>> multilabel_average_precision(preds, target, num_labels=3, average=None, thresholds=5)
tensor([0.7500, 0.6667, 0.9167])