
Precision At Fixed Recall

Module Interface

BinaryPrecisionAtFixedRecall

class torchmetrics.classification.BinaryPrecisionAtFixedRecall(min_recall, thresholds=None, ignore_index=None, validate_args=True, **kwargs)[source]

Compute the highest possible precision value given the minimum recall threshold provided.

This is done by first calculating the precision-recall curve for different thresholds and then finding the precision for the given recall level.

As input to forward and update the metric accepts the following input:

  • preds (Tensor): A float tensor of shape (N, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will auto-apply sigmoid per element.

  • target (Tensor): An int tensor of shape (N, ...). Target should be a tensor containing ground truth labels, and therefore only contain {0,1} values (except if ignore_index is specified). The value 1 always encodes the positive class.

Note

Additional dimension ... will be flattened into the batch dimension.

As output to forward and compute the metric returns the following output:

  • precision (Tensor): A scalar tensor with the maximum precision for the given recall level

  • threshold (Tensor): A scalar tensor with the corresponding threshold level
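
For instance, a minimal sketch of accumulating predictions batch by batch with update and computing the final value once (the tensor values are made up for illustration):

>>> from torch import tensor
>>> from torchmetrics.classification import BinaryPrecisionAtFixedRecall
>>> metric = BinaryPrecisionAtFixedRecall(min_recall=0.5)
>>> for batch_preds, batch_target in [
...     (tensor([0.1, 0.9]), tensor([0, 1])),
...     (tensor([0.4, 0.6]), tensor([1, 0])),
... ]:
...     metric.update(batch_preds, batch_target)
>>> precision, threshold = metric.compute()  # scalar tensors, as described above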

Note

The implementation supports calculating the metric both in a non-binned, exact version and in a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \mathcal{O}(n_{samples}), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \mathcal{O}(n_{thresholds}) (constant memory).
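
Concretely, a short sketch of each accepted form of the thresholds argument (bin counts and values are arbitrary, chosen for illustration):

>>> from torch import linspace
>>> from torchmetrics.classification import BinaryPrecisionAtFixedRecall
>>> m_exact = BinaryPrecisionAtFixedRecall(min_recall=0.5, thresholds=None)  # non-binned, O(n_samples) memory
>>> m_binned = BinaryPrecisionAtFixedRecall(min_recall=0.5, thresholds=100)  # 100 bins linearly spaced in [0, 1]
>>> m_list = BinaryPrecisionAtFixedRecall(min_recall=0.5, thresholds=[0.25, 0.5, 0.75])  # explicit bins
>>> m_tensor = BinaryPrecisionAtFixedRecall(min_recall=0.5, thresholds=linspace(0, 1, 100))  # 1d tensor of bins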

Parameters:
  • min_recall (float) – float value specifying minimum recall threshold.

  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. Most accurate but also most memory consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d Tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Example

>>> from torch import tensor
>>> from torchmetrics.classification import BinaryPrecisionAtFixedRecall
>>> preds = tensor([0, 0.5, 0.7, 0.8])
>>> target = tensor([0, 1, 1, 0])
>>> metric = BinaryPrecisionAtFixedRecall(min_recall=0.5, thresholds=None)
>>> metric(preds, target)
(tensor(0.6667), tensor(0.5000))
>>> metric = BinaryPrecisionAtFixedRecall(min_recall=0.5, thresholds=5)
>>> metric(preds, target)
(tensor(0.6667), tensor(0.5000))
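
Since values outside [0,1] are treated as logits, passing raw logits or their sigmoid-transformed probabilities should give the same result; a hedged sketch checking this equivalence (the values are illustrative):

>>> import torch
>>> probs = torch.tensor([0.2, 0.8, 0.6, 0.3])
>>> logits = torch.logit(probs)  # contains values outside [0, 1], so sigmoid is auto-applied
>>> target = torch.tensor([0, 1, 1, 0])
>>> m1 = BinaryPrecisionAtFixedRecall(min_recall=0.5)
>>> m2 = BinaryPrecisionAtFixedRecall(min_recall=0.5)
>>> assert torch.allclose(m1(probs, target)[0], m2(logits, target)[0])
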
plot(val=None, ax=None)[source]

Plot a single or multiple values from the metric.

Parameters:
  • val (Union[Tensor, Sequence[Tensor], None]) – Either a single result from calling metric.forward or metric.compute or a list of these results. If no value is provided, will automatically call metric.compute and plot that result.

  • ax (Optional[Axes]) – A matplotlib axis object. If provided, will add the plot to that axis

Return type:

Tuple[Figure, Union[Axes, ndarray]]

Returns:

Figure object and Axes object

Raises:

ModuleNotFoundError – If matplotlib is not installed

>>> from torch import rand, randint
>>> # Example plotting a single value
>>> from torchmetrics.classification import BinaryPrecisionAtFixedRecall
>>> metric = BinaryPrecisionAtFixedRecall(min_recall=0.5)
>>> metric.update(rand(10), randint(2,(10,)))
>>> fig_, ax_ = metric.plot()  # the returned plot only shows the precision value by default

>>> from torch import rand, randint
>>> # Example plotting multiple values
>>> from torchmetrics.classification import BinaryPrecisionAtFixedRecall
>>> metric = BinaryPrecisionAtFixedRecall(min_recall=0.5)
>>> values = []
>>> for _ in range(10):
...     # we index by 0 such that only the precision value is plotted
...     values.append(metric(rand(10), randint(2,(10,)))[0])
>>> fig_, ax_ = metric.plot(values)


MulticlassPrecisionAtFixedRecall

class torchmetrics.classification.MulticlassPrecisionAtFixedRecall(num_classes, min_recall, thresholds=None, ignore_index=None, validate_args=True, **kwargs)[source]

Compute the highest possible precision value given the minimum recall threshold provided.

This is done by first calculating the precision-recall curve for different thresholds and then finding the precision for the given recall level.

As input to forward and update the metric accepts the following input:

  • preds (Tensor): A float tensor of shape (N, C, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will auto-apply softmax per sample.

  • target (Tensor): An int tensor of shape (N, ...). Target should be a tensor containing ground truth labels, and therefore only contain values in the [0, n_classes-1] range (except if ignore_index is specified).

Note

Additional dimension ... will be flattened into the batch dimension.

As output to forward and compute the metric returns a tuple of either 2 tensors or 2 lists containing:

  • precision (Tensor): A 1d tensor of size (n_classes, ) with the maximum precision for the given recall level per class

  • threshold (Tensor): A 1d tensor of size (n_classes, ) with the corresponding threshold level per class

Note

The implementation supports calculating the metric both in a non-binned, exact version and in a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \mathcal{O}(n_{samples}), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \mathcal{O}(n_{thresholds} \times n_{classes}) (constant memory).

Parameters:
  • num_classes (int) – Integer specifying the number of classes

  • min_recall (float) – float value specifying minimum recall threshold.

  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. Most accurate but also most memory consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d Tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Example

>>> from torch import tensor
>>> from torchmetrics.classification import MulticlassPrecisionAtFixedRecall
>>> preds = tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
...                 [0.05, 0.75, 0.05, 0.05, 0.05],
...                 [0.05, 0.05, 0.75, 0.05, 0.05],
...                 [0.05, 0.05, 0.05, 0.75, 0.05]])
>>> target = tensor([0, 1, 3, 2])
>>> metric = MulticlassPrecisionAtFixedRecall(num_classes=5, min_recall=0.5, thresholds=None)
>>> metric(preds, target)  
(tensor([1.0000, 1.0000, 0.2500, 0.2500, 0.0000]),
 tensor([7.5000e-01, 7.5000e-01, 5.0000e-02, 5.0000e-02, 1.0000e+06]))
>>> mcrafp = MulticlassPrecisionAtFixedRecall(num_classes=5, min_recall=0.5, thresholds=5)
>>> mcrafp(preds, target)  
(tensor([1.0000, 1.0000, 0.2500, 0.2500, 0.0000]),
 tensor([7.5000e-01, 7.5000e-01, 0.0000e+00, 0.0000e+00, 1.0000e+06]))
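
A hedged sketch of the ignore_index argument from the signature above, masking padded target positions (the values are illustrative):

>>> import torch
>>> logits = torch.randn(6, 5)  # raw scores; softmax is auto-applied as described above
>>> padded_target = torch.tensor([0, 1, 2, 3, -1, -1])  # -1 marks positions to ignore
>>> metric = MulticlassPrecisionAtFixedRecall(num_classes=5, min_recall=0.5, ignore_index=-1)
>>> precision, thresholds = metric(logits, padded_target)  # per-class tensors of size (num_classes,)
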
plot(val=None, ax=None)[source]

Plot a single or multiple values from the metric.

Parameters:
  • val (Union[Tensor, Sequence[Tensor], None]) – Either a single result from calling metric.forward or metric.compute or a list of these results. If no value is provided, will automatically call metric.compute and plot that result.

  • ax (Optional[Axes]) – A matplotlib axis object. If provided, will add the plot to that axis

Return type:

Tuple[Figure, Union[Axes, ndarray]]

Returns:

Figure object and Axes object

Raises:

ModuleNotFoundError – If matplotlib is not installed

>>> from torch import rand, randint
>>> # Example plotting a single value per class
>>> from torchmetrics.classification import MulticlassPrecisionAtFixedRecall
>>> metric = MulticlassPrecisionAtFixedRecall(num_classes=3, min_recall=0.5)
>>> metric.update(rand(20, 3).softmax(dim=-1), randint(3, (20,)))
>>> fig_, ax_ = metric.plot()  # the returned plot only shows the precision value by default

>>> from torch import rand, randint
>>> # Example plotting multiple values per class
>>> from torchmetrics.classification import MulticlassPrecisionAtFixedRecall
>>> metric = MulticlassPrecisionAtFixedRecall(num_classes=3, min_recall=0.5)
>>> values = []
>>> for _ in range(20):
...     # we index by 0 such that only the precision value is plotted
...     values.append(metric(rand(20, 3).softmax(dim=-1), randint(3, (20,)))[0])
>>> fig_, ax_ = metric.plot(values)


MultilabelPrecisionAtFixedRecall

class torchmetrics.classification.MultilabelPrecisionAtFixedRecall(num_labels, min_recall, thresholds=None, ignore_index=None, validate_args=True, **kwargs)[source]

Compute the highest possible precision value given the minimum recall threshold provided.

This is done by first calculating the precision-recall curve for different thresholds and then finding the precision for the given recall level.

As input to forward and update the metric accepts the following input:

  • preds (Tensor): A float tensor of shape (N, C, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will auto-apply sigmoid per element.

  • target (Tensor): An int tensor of shape (N, ...). Target should be a tensor containing ground truth labels, and therefore only contain {0,1} values (except if ignore_index is specified). The value 1 always encodes the positive class.

Note

Additional dimension ... will be flattened into the batch dimension.

As output to forward and compute the metric returns a tuple of either 2 tensors or 2 lists containing:

  • precision (Tensor): A 1d tensor of size (n_labels, ) with the maximum precision for the given recall level per label

  • threshold (Tensor): A 1d tensor of size (n_labels, ) with the corresponding threshold level per label

Note

The implementation supports calculating the metric both in a non-binned, exact version and in a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \mathcal{O}(n_{samples}), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \mathcal{O}(n_{thresholds} \times n_{labels}) (constant memory).

Parameters:
  • num_labels (int) – Integer specifying the number of labels

  • min_recall (float) – float value specifying minimum recall threshold.

  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. Most accurate but also most memory consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d Tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Example

>>> from torch import tensor
>>> from torchmetrics.classification import MultilabelPrecisionAtFixedRecall
>>> preds = tensor([[0.75, 0.05, 0.35],
...                 [0.45, 0.75, 0.05],
...                 [0.05, 0.55, 0.75],
...                 [0.05, 0.65, 0.05]])
>>> target = tensor([[1, 0, 1],
...                  [0, 0, 0],
...                  [0, 1, 1],
...                  [1, 1, 1]])
>>> metric = MultilabelPrecisionAtFixedRecall(num_labels=3, min_recall=0.5, thresholds=None)
>>> metric(preds, target)
(tensor([1.0000, 0.6667, 1.0000]), tensor([0.7500, 0.5500, 0.3500]))
>>> mlrafp = MultilabelPrecisionAtFixedRecall(num_labels=3, min_recall=0.5, thresholds=5)
>>> mlrafp(preds, target)
(tensor([1.0000, 0.6667, 1.0000]), tensor([0.7500, 0.5000, 0.2500]))
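
The returned per-label thresholds can serve as operating points; a hedged sketch binarizing the predictions above with them:

>>> metric2 = MultilabelPrecisionAtFixedRecall(num_labels=3, min_recall=0.5)
>>> precision, thresholds = metric2(preds, target)
>>> hard_preds = preds >= thresholds  # broadcasts the (num_labels,) thresholds over the batch
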
plot(val=None, ax=None)[source]

Plot a single or multiple values from the metric.

Parameters:
  • val (Union[Tensor, Sequence[Tensor], None]) – Either a single result from calling metric.forward or metric.compute or a list of these results. If no value is provided, will automatically call metric.compute and plot that result.

  • ax (Optional[Axes]) – A matplotlib axis object. If provided, will add the plot to that axis

Return type:

Tuple[Figure, Union[Axes, ndarray]]

Returns:

Figure object and Axes object

Raises:

ModuleNotFoundError – If matplotlib is not installed

>>> from torch import rand, randint
>>> # Example plotting a single value
>>> from torchmetrics.classification import MultilabelPrecisionAtFixedRecall
>>> metric = MultilabelPrecisionAtFixedRecall(num_labels=3, min_recall=0.5)
>>> metric.update(rand(20, 3), randint(2, (20, 3)))
>>> fig_, ax_ = metric.plot()  # the returned plot only shows the precision value by default

>>> from torch import rand, randint
>>> # Example plotting multiple values
>>> from torchmetrics.classification import MultilabelPrecisionAtFixedRecall
>>> metric = MultilabelPrecisionAtFixedRecall(num_labels=3, min_recall=0.5)
>>> values = []
>>> for _ in range(10):
...     # we index by 0 such that only the precision value is plotted
...     values.append(metric(rand(20, 3), randint(2, (20, 3)))[0])
>>> fig_, ax_ = metric.plot(values)


Functional Interface

binary_precision_at_fixed_recall

torchmetrics.functional.classification.binary_precision_at_fixed_recall(preds, target, min_recall, thresholds=None, ignore_index=None, validate_args=True)[source]

Compute the highest possible precision value given the minimum recall threshold provided for binary tasks.

This is done by first calculating the precision-recall curve for different thresholds and then finding the precision for the given recall level.

Accepts the following input tensors:

  • preds (float tensor): (N, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will auto-apply sigmoid per element.

  • target (int tensor): (N, ...). Target should be a tensor containing ground truth labels, and therefore only contain {0,1} values (except if ignore_index is specified). The value 1 always encodes the positive class.

Additional dimension ... will be flattened into the batch dimension.

The implementation supports calculating the metric both in a non-binned, exact version and in a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \mathcal{O}(n_{samples}), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \mathcal{O}(n_{thresholds}) (constant memory).

Parameters:
  • preds (Tensor) – Tensor with predictions

  • target (Tensor) – Tensor with true labels

  • min_recall (float) – float value specifying minimum recall threshold.

  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. Most accurate but also most memory consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d Tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

Returns:

a tuple of 2 tensors containing:

  • precision: a scalar tensor with the maximum precision for the given recall level

  • threshold: a scalar tensor with the corresponding threshold level

Return type:

(tuple)

Example

>>> import torch
>>> from torchmetrics.functional.classification import binary_precision_at_fixed_recall
>>> preds = torch.tensor([0, 0.5, 0.7, 0.8])
>>> target = torch.tensor([0, 1, 1, 0])
>>> binary_precision_at_fixed_recall(preds, target, min_recall=0.5, thresholds=None)
(tensor(0.6667), tensor(0.5000))
>>> binary_precision_at_fixed_recall(preds, target, min_recall=0.5, thresholds=5)
(tensor(0.6667), tensor(0.5000))
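
A hedged sketch using the returned threshold as the decision cutoff on the same predictions:

>>> precision, threshold = binary_precision_at_fixed_recall(preds, target, min_recall=0.5)
>>> hard_preds = (preds >= threshold).int()  # positive class predicted at the operating point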

multiclass_precision_at_fixed_recall

torchmetrics.functional.classification.multiclass_precision_at_fixed_recall(preds, target, num_classes, min_recall, thresholds=None, ignore_index=None, validate_args=True)[source]

Compute the highest possible precision value given the minimum recall threshold provided for multiclass tasks.

This is done by first calculating the precision-recall curve for different thresholds and then finding the precision for the given recall level.

Accepts the following input tensors:

  • preds (float tensor): (N, C, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will auto-apply softmax per sample.

  • target (int tensor): (N, ...). Target should be a tensor containing ground truth labels, and therefore only contain values in the [0, n_classes-1] range (except if ignore_index is specified).

Additional dimension ... will be flattened into the batch dimension.

The implementation supports calculating the metric both in a non-binned, exact version and in a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \mathcal{O}(n_{samples}), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \mathcal{O}(n_{thresholds} \times n_{classes}) (constant memory).

Parameters:
  • preds (Tensor) – Tensor with predictions

  • target (Tensor) – Tensor with true labels

  • num_classes (int) – Integer specifying the number of classes

  • min_recall (float) – float value specifying minimum recall threshold.

  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. Most accurate but also most memory consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d Tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

Returns:

a tuple of either 2 tensors or 2 lists containing

  • precision: a 1d tensor of size (n_classes, ) with the maximum precision for the given recall level per class

  • thresholds: a 1d tensor of size (n_classes, ) with the corresponding threshold level per class

Return type:

(tuple)

Example

>>> import torch
>>> from torchmetrics.functional.classification import multiclass_precision_at_fixed_recall
>>> preds = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
...                       [0.05, 0.75, 0.05, 0.05, 0.05],
...                       [0.05, 0.05, 0.75, 0.05, 0.05],
...                       [0.05, 0.05, 0.05, 0.75, 0.05]])
>>> target = torch.tensor([0, 1, 3, 2])
>>> multiclass_precision_at_fixed_recall(  
...     preds, target, num_classes=5, min_recall=0.5, thresholds=None)
(tensor([1.0000, 1.0000, 0.2500, 0.2500, 0.0000]),
 tensor([7.5000e-01, 7.5000e-01, 5.0000e-02, 5.0000e-02, 1.0000e+06]))
>>> multiclass_precision_at_fixed_recall(  
...     preds, target, num_classes=5, min_recall=0.5, thresholds=5)
(tensor([1.0000, 1.0000, 0.2500, 0.2500, 0.0000]),
 tensor([7.5000e-01, 7.5000e-01, 0.0000e+00, 0.0000e+00, 1.0000e+06]))
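
Since values outside [0,1] are treated as logits with softmax auto-applied, raw model outputs can be passed directly; a hedged sketch with random values for illustration:

>>> logits = torch.randn(10, 5)  # unnormalized scores; softmax is applied internally
>>> labels = torch.randint(5, (10,))
>>> p, t = multiclass_precision_at_fixed_recall(logits, labels, num_classes=5, min_recall=0.5)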

multilabel_precision_at_fixed_recall

torchmetrics.functional.classification.multilabel_precision_at_fixed_recall(preds, target, num_labels, min_recall, thresholds=None, ignore_index=None, validate_args=True)[source]

Compute the highest possible precision value given the minimum recall threshold provided for multilabel tasks.

This is done by first calculating the precision-recall curve for different thresholds and then finding the precision for the given recall level.

Accepts the following input tensors:

  • preds (float tensor): (N, C, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will auto-apply sigmoid per element.

  • target (int tensor): (N, C, ...). Target should be a tensor containing ground truth labels, and therefore only contain {0,1} values (except if ignore_index is specified).

Additional dimension ... will be flattened into the batch dimension.

The implementation supports calculating the metric both in a non-binned, exact version and in a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \mathcal{O}(n_{samples}), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \mathcal{O}(n_{thresholds} \times n_{labels}) (constant memory).

Parameters:
  • preds (Tensor) – Tensor with predictions

  • target (Tensor) – Tensor with true labels

  • num_labels (int) – Integer specifying the number of labels

  • min_recall (float) – float value specifying minimum recall threshold.

  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. Most accurate but also most memory consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d Tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • ignore_index (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

Returns:

a tuple of either 2 tensors or 2 lists containing

  • precision: a 1d tensor of size (n_labels, ) with the maximum precision for the given recall level per label

  • thresholds: a 1d tensor of size (n_labels, ) with the corresponding threshold level per label

Return type:

(tuple)

Example

>>> import torch
>>> from torchmetrics.functional.classification import multilabel_precision_at_fixed_recall
>>> preds = torch.tensor([[0.75, 0.05, 0.35],
...                       [0.45, 0.75, 0.05],
...                       [0.05, 0.55, 0.75],
...                       [0.05, 0.65, 0.05]])
>>> target = torch.tensor([[1, 0, 1],
...                        [0, 0, 0],
...                        [0, 1, 1],
...                        [1, 1, 1]])
>>> multilabel_precision_at_fixed_recall(preds, target, num_labels=3, min_recall=0.5, thresholds=None)
(tensor([1.0000, 0.6667, 1.0000]), tensor([0.7500, 0.5500, 0.3500]))
>>> multilabel_precision_at_fixed_recall(preds, target, num_labels=3, min_recall=0.5, thresholds=5)
(tensor([1.0000, 0.6667, 1.0000]), tensor([0.7500, 0.5000, 0.2500]))
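
For large inputs, memory can be bounded by passing an explicit 1d tensor of threshold bins; a hedged sketch reusing preds and target from above:

>>> bins = torch.linspace(0, 1, 101)  # fixed grid of candidate thresholds
>>> precision, thresholds = multilabel_precision_at_fixed_recall(
...     preds, target, num_labels=3, min_recall=0.5, thresholds=bins)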