Retrieval Recall

Module Interface

class torchmetrics.retrieval.RetrievalRecall(empty_target_action='neg', ignore_index=None, top_k=None, aggregation='mean', **kwargs)[source]

Compute IR Recall.

Works with binary target data. Accepts float predictions from a model output.

As input to forward and update the metric accepts the following input:

  • preds (Tensor): A float tensor of shape (N, ...)

  • target (Tensor): A long or bool tensor of shape (N, ...)

  • indexes (Tensor): A long tensor of shape (N, ...) indicating to which query each prediction belongs

As output to forward and compute the metric returns the following output:

  • r@k (Tensor): A single-value tensor with the recall (at top_k) of the predictions preds w.r.t. the labels target

All indexes, preds and target must have the same dimension and will be flattened at the beginning, so that, for example, a tensor of shape (N, M) is treated as (N * M, ). Predictions will first be grouped by indexes, and the metric will then be computed as the aggregation (by default the mean) of the per-query values.
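
As a minimal sketch of this flattening behaviour (values chosen arbitrarily), 2D inputs give the same result as their flattened 1D counterparts:

>>> from torch import tensor
>>> from torchmetrics.retrieval import RetrievalRecall
>>> indexes = tensor([[0, 0, 0], [1, 1, 1]])
>>> preds = tensor([[0.2, 0.3, 0.5], [0.1, 0.3, 0.5]])
>>> target = tensor([[False, False, True], [True, False, True]])
>>> # equivalent to passing the same tensors flattened to shape (6,)
>>> RetrievalRecall(top_k=2)(preds, target, indexes=indexes)
tensor(0.7500)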

Parameters:
  • empty_target_action (str) –

    Specify what to do with queries that do not have at least one positive target (the difference is illustrated in the examples below). Choose from:

    • 'neg': those queries count as 0.0 (default)

    • 'pos': those queries count as 1.0

    • 'skip': skip those queries; if all queries are skipped, 0.0 is returned

    • 'error': raise a ValueError

  • ignore_index (Optional[int]) – Ignore predictions where the target is equal to this number.

  • top_k (Optional[int]) – Consider only the top k elements for each query (default: None, which considers them all)

  • aggregation (Union[Literal['mean', 'median', 'min', 'max'], Callable]) –

    Specify how to aggregate over indexes. Can either be a custom callable function that takes in a single tensor and returns a scalar value, or one of the following strings:

    • 'mean': average value is returned

    • 'median': median value is returned

    • 'max': max value is returned

    • 'min': min value is returned

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Raises:
  • ValueError – If empty_target_action is not one of error, skip, neg or pos.

  • ValueError – If ignore_index is neither None nor an integer.

  • ValueError – If top_k is neither None nor an integer greater than 0.

Example

>>> from torch import tensor
>>> from torchmetrics.retrieval import RetrievalRecall
>>> indexes = tensor([0, 0, 0, 1, 1, 1, 1])
>>> preds = tensor([0.2, 0.3, 0.5, 0.1, 0.3, 0.5, 0.2])
>>> target = tensor([False, False, True, False, True, False, True])
>>> r2 = RetrievalRecall(top_k=2)
>>> r2(preds, target, indexes=indexes)
tensor(0.7500)
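
A further minimal sketch (values chosen for illustration) of how empty_target_action and aggregation change the result when a query has no positive target:

>>> from torch import tensor
>>> from torchmetrics.retrieval import RetrievalRecall
>>> indexes = tensor([0, 0, 1, 1])
>>> preds = tensor([0.5, 0.3, 0.2, 0.4])
>>> target = tensor([False, False, True, False])
>>> # query 0 has no positive target: with 'neg' (the default) it counts as 0.0
>>> RetrievalRecall(empty_target_action='neg')(preds, target, indexes=indexes)
tensor(0.5000)
>>> # with 'skip' it is dropped, leaving only query 1 (recall 1.0)
>>> RetrievalRecall(empty_target_action='skip')(preds, target, indexes=indexes)
tensor(1.)
>>> # aggregation='min' reports the worst query instead of the mean
>>> RetrievalRecall(empty_target_action='neg', aggregation='min')(preds, target, indexes=indexes)
tensor(0.)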
plot(val=None, ax=None)[source]

Plot a single or multiple values from the metric.

Parameters:
  • val (Union[Tensor, Sequence[Tensor], None]) – Either a single result from calling metric.forward or metric.compute or a list of these results. If no value is provided, will automatically call metric.compute and plot that result.

  • ax (Optional[Axes]) – A matplotlib axis object. If provided, the plot will be added to that axis.

Return type:

Tuple[Figure, Union[Axes, ndarray]]

Returns:

Figure and Axes object

Raises:

ModuleNotFoundError – If matplotlib is not installed

>>> import torch
>>> from torchmetrics.retrieval import RetrievalRecall
>>> # Example plotting a single value
>>> metric = RetrievalRecall()
>>> metric.update(torch.rand(10,), torch.randint(2, (10,)), indexes=torch.randint(2,(10,)))
>>> fig_, ax_ = metric.plot()
[figure: plot of the single computed recall value]
>>> import torch
>>> from torchmetrics.retrieval import RetrievalRecall
>>> # Example plotting multiple values
>>> metric = RetrievalRecall()
>>> values = []
>>> for _ in range(10):
...     values.append(metric(torch.rand(10,), torch.randint(2, (10,)), indexes=torch.randint(2,(10,))))
>>> fig, ax = metric.plot(values)
[figure: plot of the ten computed recall values]

Functional Interface

torchmetrics.functional.retrieval.retrieval_recall(preds, target, top_k=None)[source]

Compute the recall metric for information retrieval.

Recall is the fraction of relevant documents retrieved among all the relevant documents, i.e. Recall@K = (# relevant documents in the top K results) / (# relevant documents).

preds and target must be of the same shape and on the same device. If no target is True, 0 is returned. target must be either bool or integer and preds must be float, otherwise an error is raised. To measure Recall@K, set top_k to a positive integer.

Parameters:
  • preds (Tensor) – estimated probabilities of each document to be relevant.

  • target (Tensor) – ground truth about each document being relevant or not.

  • top_k (Optional[int]) – consider only the top k elements (default: None, which considers them all)

Return type:

Tensor

Returns:

A single-value tensor with the recall (at top_k) of the predictions preds w.r.t. the labels target.

Raises:

ValueError – If top_k is neither None nor an integer greater than 0

Example

>>> from torch import tensor
>>> from torchmetrics.functional import retrieval_recall
>>> preds = tensor([0.2, 0.3, 0.5])
>>> target = tensor([True, False, True])
>>> retrieval_recall(preds, target, top_k=2)
tensor(0.5000)
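
As a sanity check, the same value can be derived by hand from the definition above; a minimal sketch reusing preds and target from the example:

>>> top_k_idx = preds.topk(2).indices        # indices of the top-2 scored documents
>>> target[top_k_idx].sum() / target.sum()   # relevant retrieved / all relevant
tensor(0.5000)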