
Average Precision

Module Interface

class torchmetrics.AveragePrecision(num_classes=None, pos_label=None, average='macro', task=None, thresholds=None, num_labels=None, ignore_index=None, validate_args=True, **kwargs)[source]

Average Precision.

Note

From v0.10, 'binary_*', 'multiclass_*' and 'multilabel_*' versions of each classification metric exist. Moving forward we recommend using these versions. This base metric will still work as it did prior to v0.10 until v0.11. From v0.11 the task argument introduced in this metric will be required, and the general order of arguments may change, such that this metric will function as a single entrypoint to the three specialized versions.
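
As an illustration, here is a minimal sketch of the task-based entrypoint (an assumption on our part: the task value strings "binary"/"multiclass"/"multilabel" and the exact keyword set may differ between versions):

>>> import torch
>>> from torchmetrics import AveragePrecision
>>> preds = torch.tensor([0, 0.1, 0.8, 0.4])
>>> target = torch.tensor([0, 1, 1, 1])
>>> average_precision = AveragePrecision(task="binary")
>>> average_precision(preds, target)
tensor(1.)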

Computes the average precision score, which summarizes the precision-recall curve into one number. Works for both binary and multiclass problems. In the multiclass case, the values are calculated on a one-vs-the-rest basis.

Forward accepts

  • preds (float tensor): (N, ...) (binary) or (N, C, ...) (multiclass) tensor with probabilities, where C is the number of classes.

  • target (long tensor): (N, ...) with integer labels

Parameters
  • num_classes (Optional[int]) – integer with number of classes. Not necessary to provide for binary problems.

  • pos_label (Optional[int]) – integer determining the positive class. Default is None, which for binary problems is translated to 1. For multiclass problems this argument should not be set, as we iteratively change it in the range [0, num_classes-1].

  • average (Optional[Literal[‘micro’, ‘macro’, ‘weighted’, ‘none’]]) –

    defines the reduction that is applied in the case of multiclass and multilabel input. Should be one of the following:

    • 'macro' [default]: Calculate the metric for each class separately, and average the metrics across classes (with equal weights for each class).

    • 'micro': Calculate the metric globally, across all samples and classes. Cannot be used with multiclass input.

    • 'weighted': Calculate the metric for each class separately, and average the metrics across classes, weighting each class by its support.

    • 'none' or None: Calculate the metric for each class separately, and return the metric for every class.

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Example (binary case):
>>> import torch
>>> from torchmetrics import AveragePrecision
>>> pred = torch.tensor([0, 0.1, 0.8, 0.4])
>>> target = torch.tensor([0, 1, 1, 1])
>>> average_precision = AveragePrecision(pos_label=1)
>>> average_precision(pred, target)
tensor(1.)
Example (multiclass case):
>>> pred = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
...                      [0.05, 0.75, 0.05, 0.05, 0.05],
...                      [0.05, 0.05, 0.75, 0.05, 0.05],
...                      [0.05, 0.05, 0.05, 0.75, 0.05]])
>>> target = torch.tensor([0, 1, 3, 2])
>>> average_precision = AveragePrecision(num_classes=5, average=None)
>>> average_precision(pred, target)
[tensor(1.), tensor(1.), tensor(0.2500), tensor(0.2500), tensor(nan)]

Initializes internal Module state, shared by both nn.Module and ScriptModule.

compute()[source]

Compute the average precision score.

Return type

Union[Tensor, List[Tensor]]

Returns

tensor with average precision. If multiclass, returns a list of such tensors, one for each class

update(preds, target)[source]

Update state with predictions and targets.

Parameters
  • preds (Tensor) – Predictions from model

  • target (Tensor) – Ground truth values

Return type

None
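
As a usage sketch (not part of the original docstring): predictions and targets can be accumulated over several batches with update() and summarized once with compute(); with these inputs, the non-binned result matches the binary example above:

>>> import torch
>>> from torchmetrics import AveragePrecision
>>> average_precision = AveragePrecision(pos_label=1)
>>> average_precision.update(torch.tensor([0.1, 0.8]), torch.tensor([1, 1]))
>>> average_precision.update(torch.tensor([0.0, 0.4]), torch.tensor([0, 1]))
>>> average_precision.compute()
tensor(1.)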

BinaryAveragePrecision

class torchmetrics.classification.BinaryAveragePrecision(thresholds=None, ignore_index=None, validate_args=True, **kwargs)[source]

Computes the average precision (AP) score for binary tasks. The AP score summarizes a precision-recall curve as a weighted mean of precisions at each threshold, with the difference in recall from the previous threshold as weight:

AP = \sum_{n} (R_n - R_{n-1}) P_n

where P_n and R_n are the precision and recall at threshold index n. This value is equivalent to the area under the precision-recall curve (AUPRC).
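
To make the formula concrete, here is a hand-rolled sketch (illustration only, not the library's implementation) that reproduces the non-binned result of the binary example further down:

>>> import torch
>>> preds = torch.tensor([0.0, 0.5, 0.7, 0.8])
>>> target = torch.tensor([0, 1, 1, 0])
>>> order = torch.argsort(preds, descending=True)   # rank samples by score
>>> t = target[order]
>>> tps = torch.cumsum(t, 0).float()                 # true positives at each cut
>>> fps = torch.cumsum(1 - t, 0).float()             # false positives at each cut
>>> precision = tps / (tps + fps)
>>> recall = tps / t.sum()
>>> prev = torch.cat([torch.zeros(1), recall[:-1]])  # R_{n-1}, with R_{-1} = 0
>>> torch.sum((recall - prev) * precision)           # AP = sum_n (R_n - R_{n-1}) P_n
tensor(0.5833)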

Accepts the following input tensors:

  • preds (float tensor): (N, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will auto-apply sigmoid per element.

  • target (int tensor): (N, ...). Target should be a tensor containing ground truth labels, and therefore only contain {0,1} values (except if ignore_index is specified).

Additional dimension ... will be flattened into the batch dimension.
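
A small sketch of the logit handling described above (our assumption here: since sigmoid is monotone, the non-binned AP depends only on the ranking and therefore matches the probability-based example below):

>>> import torch
>>> from torchmetrics.classification import BinaryAveragePrecision
>>> logits = torch.tensor([-2.0, 0.0, 0.85, 1.4])  # values outside [0,1] -> treated as logits
>>> target = torch.tensor([0, 1, 1, 0])
>>> BinaryAveragePrecision(thresholds=None)(logits, target)
tensor(0.5833)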

The implementation supports calculating the metric in both a non-binned but accurate version and a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \mathcal{O}(n_{samples}), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \mathcal{O}(n_{thresholds}) (constant memory).
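
The thresholds argument also accepts an explicit list or tensor of bin edges. As a sketch (assuming, per the description below, that an int value of 5 is equivalent to 5 linearly spaced bins in [0, 1]), the following should agree with the thresholds=5 example further down:

>>> import torch
>>> from torchmetrics.classification import BinaryAveragePrecision
>>> preds = torch.tensor([0, 0.5, 0.7, 0.8])
>>> target = torch.tensor([0, 1, 1, 0])
>>> BinaryAveragePrecision(thresholds=torch.linspace(0, 1, 5))(preds, target)
tensor(0.6667)
>>> BinaryAveragePrecision(thresholds=[0.0, 0.25, 0.5, 0.75, 1.0])(preds, target)
tensor(0.6667)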

Parameters
  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. Most accurate but also most memory consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Returns

A single scalar with the average precision score

Example

>>> import torch
>>> from torchmetrics.classification import BinaryAveragePrecision
>>> preds = torch.tensor([0, 0.5, 0.7, 0.8])
>>> target = torch.tensor([0, 1, 1, 0])
>>> metric = BinaryAveragePrecision(thresholds=None)
>>> metric(preds, target)
tensor(0.5833)
>>> metric = BinaryAveragePrecision(thresholds=5)
>>> metric(preds, target)
tensor(0.6667)

Initializes internal Module state, shared by both nn.Module and ScriptModule.

compute()[source]

Override this method to compute the final metric value from state variables synchronized across the distributed backend.

Return type

Tensor

MulticlassAveragePrecision

class torchmetrics.classification.MulticlassAveragePrecision(num_classes, average='macro', thresholds=None, ignore_index=None, validate_args=True, **kwargs)[source]

Computes the average precision (AP) score for multiclass tasks. The AP score summarizes a precision-recall curve as a weighted mean of precisions at each threshold, with the difference in recall from the previous threshold as weight:

AP = \sum_{n} (R_n - R_{n-1}) P_n

where P_n and R_n are the precision and recall at threshold index n. This value is equivalent to the area under the precision-recall curve (AUPRC).

Accepts the following input tensors:

  • preds (float tensor): (N, C, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will auto-apply softmax per sample.

  • target (int tensor): (N, ...). Target should be a tensor containing ground truth labels, and therefore only contain values in the [0, n_classes-1] range (except if ignore_index is specified).

Additional dimension ... will be flattened into the batch dimension.

The implementation supports calculating the metric in both a non-binned but accurate version and a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \mathcal{O}(n_{samples}), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \mathcal{O}(n_{thresholds} \times n_{classes}) (constant memory).

Parameters
  • num_classes (int) – Integer specifying the number of classes

  • average (Optional[Literal[‘macro’, ‘weighted’, ‘none’]]) –

    Defines the reduction that is applied over classes. Should be one of the following:

    • macro: Calculate score for each class and average them

    • weighted: Calculates score for each class and computes weighted average using their support

    • "none" or None: Calculates score for each class and applies no reduction

  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. Most accurate but also most memory consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Returns

If average=None or "none", a 1d tensor of shape (n_classes,) will be returned with the AP score per class. If average="macro" or "weighted", a single scalar is returned.

Example

>>> import torch
>>> from torchmetrics.classification import MulticlassAveragePrecision
>>> preds = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
...                       [0.05, 0.75, 0.05, 0.05, 0.05],
...                       [0.05, 0.05, 0.75, 0.05, 0.05],
...                       [0.05, 0.05, 0.05, 0.75, 0.05]])
>>> target = torch.tensor([0, 1, 3, 2])
>>> metric = MulticlassAveragePrecision(num_classes=5, average="macro", thresholds=None)
>>> metric(preds, target)
tensor(0.6250)
>>> metric = MulticlassAveragePrecision(num_classes=5, average=None, thresholds=None)
>>> metric(preds, target)
tensor([1.0000, 1.0000, 0.2500, 0.2500,    nan])
>>> metric = MulticlassAveragePrecision(num_classes=5, average="macro", thresholds=5)
>>> metric(preds, target)
tensor(0.5000)
>>> metric = MulticlassAveragePrecision(num_classes=5, average=None, thresholds=5)
>>> metric(preds, target)
tensor([1.0000, 1.0000, 0.2500, 0.2500, -0.0000])
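
As a quick sanity check (continuing the example above; illustration only, not an API guarantee): with these inputs, the "macro" score is simply the mean of the per-class scores, excluding the class that never occurs in target (the nan entry above):

>>> per_class = MulticlassAveragePrecision(num_classes=5, average=None, thresholds=None)(preds, target)
>>> per_class[~per_class.isnan()].mean()
tensor(0.6250)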

Initializes internal Module state, shared by both nn.Module and ScriptModule.

compute()[source]

Override this method to compute the final metric value from state variables synchronized across the distributed backend.

Return type

Tensor

MultilabelAveragePrecision

class torchmetrics.classification.MultilabelAveragePrecision(num_labels, average='macro', thresholds=None, ignore_index=None, validate_args=True, **kwargs)[source]

Computes the average precision (AP) score for multilabel tasks. The AP score summarizes a precision-recall curve as a weighted mean of precisions at each threshold, with the difference in recall from the previous threshold as weight:

AP = \sum_{n} (R_n - R_{n-1}) P_n

where P_n and R_n are the precision and recall at threshold index n. This value is equivalent to the area under the precision-recall curve (AUPRC).

Accepts the following input tensors:

  • preds (float tensor): (N, C, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will auto-apply sigmoid per element.

  • target (int tensor): (N, C, ...). Target should be a tensor containing ground truth labels, and therefore only contain {0,1} values (except if ignore_index is specified).

Additional dimension ... will be flattened into the batch dimension.

The implementation supports calculating the metric in both a non-binned but accurate version and a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \mathcal{O}(n_{samples}), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \mathcal{O}(n_{thresholds} \times n_{labels}) (constant memory).

Parameters
  • num_labels (int) – Integer specifying the number of labels

  • average (Optional[Literal[‘micro’, ‘macro’, ‘weighted’, ‘none’]]) –

    Defines the reduction that is applied over labels. Should be one of the following:

    • micro: Sum score over all labels

    • macro: Calculate score for each label and average them

    • weighted: Calculates score for each label and computes weighted average using their support

    • "none" or None: Calculates score for each label and applies no reduction

  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. Most accurate but also most memory consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Returns

If average=None or "none", a 1d tensor of shape (n_labels,) will be returned with the AP score per label. If average="micro", "macro" or "weighted", a single scalar is returned.

Example

>>> import torch
>>> from torchmetrics.classification import MultilabelAveragePrecision
>>> preds = torch.tensor([[0.75, 0.05, 0.35],
...                       [0.45, 0.75, 0.05],
...                       [0.05, 0.55, 0.75],
...                       [0.05, 0.65, 0.05]])
>>> target = torch.tensor([[1, 0, 1],
...                        [0, 0, 0],
...                        [0, 1, 1],
...                        [1, 1, 1]])
>>> metric = MultilabelAveragePrecision(num_labels=3, average="macro", thresholds=None)
>>> metric(preds, target)
tensor(0.7500)
>>> metric = MultilabelAveragePrecision(num_labels=3, average=None, thresholds=None)
>>> metric(preds, target)
tensor([0.7500, 0.5833, 0.9167])
>>> metric = MultilabelAveragePrecision(num_labels=3, average="macro", thresholds=5)
>>> metric(preds, target)
tensor(0.7778)
>>> metric = MultilabelAveragePrecision(num_labels=3, average=None, thresholds=5)
>>> metric(preds, target)
tensor([0.7500, 0.6667, 0.9167])
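
Similarly (continuing the example above; illustration only): with these inputs, the "macro" score equals the plain mean of the per-label scores:

>>> per_label = MultilabelAveragePrecision(num_labels=3, average=None, thresholds=None)(preds, target)
>>> per_label.mean()
tensor(0.7500)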

Initializes internal Module state, shared by both nn.Module and ScriptModule.

compute()[source]

Override this method to compute the final metric value from state variables synchronized across the distributed backend.

Return type

Tensor

Functional Interface

torchmetrics.functional.average_precision(preds, target, num_classes=None, pos_label=None, average='macro', task=None, thresholds=None, num_labels=None, ignore_index=None, validate_args=True)[source]

Average precision.

Note

From v0.10, 'binary_*', 'multiclass_*' and 'multilabel_*' versions of each classification metric exist. Moving forward we recommend using these versions. This base metric will still work as it did prior to v0.10 until v0.11. From v0.11 the task argument introduced in this metric will be required, and the general order of arguments may change, such that this metric will function as a single entrypoint to the three specialized versions.

Computes the average precision score.

Parameters
  • preds (Tensor) – predictions from model (logits or probabilities)

  • target (Tensor) – ground truth values

  • num_classes (Optional[int]) – integer with number of classes. Not necessary to provide for binary problems.

  • pos_label (Optional[int]) – integer determining the positive class. Default is None, which for binary problems is translated to 1. For multiclass problems this argument should not be set, as we iteratively change it in the range [0, num_classes-1].

  • average (Optional[Literal[‘macro’, ‘weighted’, ‘none’]]) –

    defines the reduction that is applied in the case of multiclass and multilabel input. Should be one of the following:

    • 'macro' [default]: Calculate the metric for each class separately, and average the metrics across classes (with equal weights for each class).

    • 'weighted': Calculate the metric for each class separately, and average the metrics across classes, weighting each class by its support.

    • 'none' or None: Calculate the metric for each class separately, and return the metric for every class.

Return type

Union[List[Tensor], Tensor]

Returns

tensor with average precision. If multiclass, it returns a list of such tensors, one for each class

Example (binary case):
>>> import torch
>>> from torchmetrics.functional import average_precision
>>> pred = torch.tensor([0, 1, 2, 3])
>>> target = torch.tensor([0, 1, 1, 1])
>>> average_precision(pred, target, pos_label=1)
tensor(1.)
Example (multiclass case):
>>> pred = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
...                      [0.05, 0.75, 0.05, 0.05, 0.05],
...                      [0.05, 0.05, 0.75, 0.05, 0.05],
...                      [0.05, 0.05, 0.05, 0.75, 0.05]])
>>> target = torch.tensor([0, 1, 3, 2])
>>> average_precision(pred, target, num_classes=5, average=None)
[tensor(1.), tensor(1.), tensor(0.2500), tensor(0.2500), tensor(nan)]

binary_average_precision

torchmetrics.functional.classification.binary_average_precision(preds, target, thresholds=None, ignore_index=None, validate_args=True)[source]

Computes the average precision (AP) score for binary tasks. The AP score summarizes a precision-recall curve as a weighted mean of precisions at each threshold, with the difference in recall from the previous threshold as weight:

AP = \sum_{n} (R_n - R_{n-1}) P_n

where P_n and R_n are the precision and recall at threshold index n. This value is equivalent to the area under the precision-recall curve (AUPRC).

Accepts the following input tensors:

  • preds (float tensor): (N, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will auto-apply sigmoid per element.

  • target (int tensor): (N, ...). Target should be a tensor containing ground truth labels, and therefore only contain {0,1} values (except if ignore_index is specified).

Additional dimension ... will be flattened into the batch dimension.

The implementation supports calculating the metric in both a non-binned but accurate version and a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \mathcal{O}(n_{samples}), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \mathcal{O}(n_{thresholds}) (constant memory).

Parameters
  • preds (Tensor) – Tensor with predictions

  • target (Tensor) – Tensor with true labels

  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. Most accurate but also most memory consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

Return type

Tensor

Returns

A single scalar with the average precision score

Example

>>> import torch
>>> from torchmetrics.functional.classification import binary_average_precision
>>> preds = torch.tensor([0, 0.5, 0.7, 0.8])
>>> target = torch.tensor([0, 1, 1, 0])
>>> binary_average_precision(preds, target, thresholds=None)
tensor(0.5833)
>>> binary_average_precision(preds, target, thresholds=5)
tensor(0.6667)

multiclass_average_precision

torchmetrics.functional.classification.multiclass_average_precision(preds, target, num_classes, average='macro', thresholds=None, ignore_index=None, validate_args=True)[source]

Computes the average precision (AP) score for multiclass tasks. The AP score summarizes a precision-recall curve as a weighted mean of precisions at each threshold, with the difference in recall from the previous threshold as weight:

AP = \sum_{n} (R_n - R_{n-1}) P_n

where P_n and R_n are the precision and recall at threshold index n. This value is equivalent to the area under the precision-recall curve (AUPRC).

Accepts the following input tensors:

  • preds (float tensor): (N, C, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will auto-apply softmax per sample.

  • target (int tensor): (N, ...). Target should be a tensor containing ground truth labels, and therefore only contain values in the [0, n_classes-1] range (except if ignore_index is specified).

Additional dimension ... will be flattened into the batch dimension.

The implementation supports calculating the metric in both a non-binned but accurate version and a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \mathcal{O}(n_{samples}), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \mathcal{O}(n_{thresholds} \times n_{classes}) (constant memory).

Parameters
  • preds (Tensor) – Tensor with predictions

  • target (Tensor) – Tensor with true labels

  • num_classes (int) – Integer specifying the number of classes

  • average (Optional[Literal[‘macro’, ‘weighted’, ‘none’]]) –

    Defines the reduction that is applied over classes. Should be one of the following:

    • macro: Calculate score for each class and average them

    • weighted: Calculates score for each class and computes weighted average using their support

    • "none" or None: Calculates score for each class and applies no reduction

  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. Most accurate but also most memory consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

Return type

Tensor

Returns

If average=None or "none", a 1d tensor of shape (n_classes,) will be returned with the AP score per class. If average="macro" or "weighted", a single scalar is returned.

Example

>>> import torch
>>> from torchmetrics.functional.classification import multiclass_average_precision
>>> preds = torch.tensor([[0.75, 0.05, 0.05, 0.05, 0.05],
...                       [0.05, 0.75, 0.05, 0.05, 0.05],
...                       [0.05, 0.05, 0.75, 0.05, 0.05],
...                       [0.05, 0.05, 0.05, 0.75, 0.05]])
>>> target = torch.tensor([0, 1, 3, 2])
>>> multiclass_average_precision(preds, target, num_classes=5, average="macro", thresholds=None)
tensor(0.6250)
>>> multiclass_average_precision(preds, target, num_classes=5, average=None, thresholds=None)
tensor([1.0000, 1.0000, 0.2500, 0.2500,    nan])
>>> multiclass_average_precision(preds, target, num_classes=5, average="macro", thresholds=5)
tensor(0.5000)
>>> multiclass_average_precision(preds, target, num_classes=5, average=None, thresholds=5)
tensor([1.0000, 1.0000, 0.2500, 0.2500, -0.0000])

multilabel_average_precision

torchmetrics.functional.classification.multilabel_average_precision(preds, target, num_labels, average='macro', thresholds=None, ignore_index=None, validate_args=True)[source]

Computes the average precision (AP) score for multilabel tasks. The AP score summarizes a precision-recall curve as a weighted mean of precisions at each threshold, with the difference in recall from the previous threshold as weight:

AP = \sum_{n} (R_n - R_{n-1}) P_n

where P_n and R_n are the precision and recall at threshold index n. This value is equivalent to the area under the precision-recall curve (AUPRC).

Accepts the following input tensors:

  • preds (float tensor): (N, C, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0,1] range, we consider the input to be logits and will auto-apply sigmoid per element.

  • target (int tensor): (N, C, ...). Target should be a tensor containing ground truth labels, and therefore only contain {0,1} values (except if ignore_index is specified).

Additional dimension ... will be flattened into the batch dimension.

The implementation supports calculating the metric in both a non-binned but accurate version and a binned version that is less accurate but more memory efficient. Setting the thresholds argument to None will activate the non-binned version that uses memory of size \mathcal{O}(n_{samples}), whereas setting the thresholds argument to either an integer, list or a 1d tensor will use a binned version that uses memory of size \mathcal{O}(n_{thresholds} \times n_{labels}) (constant memory).

Parameters
  • preds (Tensor) – Tensor with predictions

  • target (Tensor) – Tensor with true labels

  • num_labels (int) – Integer specifying the number of labels

  • average (Optional[Literal[‘micro’, ‘macro’, ‘weighted’, ‘none’]]) –

    Defines the reduction that is applied over labels. Should be one of the following:

    • micro: Sum score over all labels

    • macro: Calculate score for each label and average them

    • weighted: Calculates score for each label and computes weighted average using their support

    • "none" or None: Calculates score for each label and applies no reduction

  • thresholds (Union[int, List[float], Tensor, None]) –

    Can be one of:

    • If set to None, will use a non-binned approach where thresholds are dynamically calculated from all the data. Most accurate but also most memory consuming approach.

    • If set to an int (larger than 1), will use that number of thresholds linearly spaced from 0 to 1 as bins for the calculation.

    • If set to a list of floats, will use the indicated thresholds in the list as bins for the calculation.

    • If set to a 1d tensor of floats, will use the indicated thresholds in the tensor as bins for the calculation.

  • validate_args (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.

Return type

Tensor

Returns

If average=None or "none", a 1d tensor of shape (n_labels,) will be returned with the AP score per label. If average="micro", "macro" or "weighted", a single scalar is returned.

Example

>>> import torch
>>> from torchmetrics.functional.classification import multilabel_average_precision
>>> preds = torch.tensor([[0.75, 0.05, 0.35],
...                       [0.45, 0.75, 0.05],
...                       [0.05, 0.55, 0.75],
...                       [0.05, 0.65, 0.05]])
>>> target = torch.tensor([[1, 0, 1],
...                        [0, 0, 0],
...                        [0, 1, 1],
...                        [1, 1, 1]])
>>> multilabel_average_precision(preds, target, num_labels=3, average="macro", thresholds=None)
tensor(0.7500)
>>> multilabel_average_precision(preds, target, num_labels=3, average=None, thresholds=None)
tensor([0.7500, 0.5833, 0.9167])
>>> multilabel_average_precision(preds, target, num_labels=3, average="macro", thresholds=5)
tensor(0.7778)
>>> multilabel_average_precision(preds, target, num_labels=3, average=None, thresholds=5)
tensor([0.7500, 0.6667, 0.9167])