Label Ranking Loss

Module Interface

class torchmetrics.LabelRankingLoss(**kwargs)[source]

Computes the label ranking loss for multilabel data [1]. The score corresponds to the average number of label pairs that are incorrectly ordered, given some predictions, weighted by the size of the label set and the number of labels not in the label set. The best score is 0.
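
Concretely, for predicted scores f and binary targets, the standard formulation of the loss is the following (a sketch: the notation is assumed here rather than taken from the implementation, and ties may be handled slightly differently):

\mathrm{LRL} = \frac{1}{N} \sum_{i=1}^{N} \frac{1}{\lvert Y_i \rvert \, \lvert \bar{Y}_i \rvert} \left\lvert \{ (k, l) \in Y_i \times \bar{Y}_i : f_{ik} \le f_{il} \} \right\rvert

where Y_i is the set of labels assigned to sample i, \bar{Y}_i is its complement, and f_{ik} is the score predicted for label k of sample i.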

Parameters

kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Example

>>> import torch
>>> from torchmetrics import LabelRankingLoss
>>> _ = torch.manual_seed(42)
>>> preds = torch.rand(10, 5)
>>> target = torch.randint(2, (10, 5))
>>> metric = LabelRankingLoss()
>>> metric(preds, target)
tensor(0.4167)
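
The metric can also accumulate state over several update calls and be reduced once with compute, for example over the batches of an epoch. A minimal sketch (the batch size and number of batches are illustrative):

>>> import torch
>>> from torchmetrics import LabelRankingLoss
>>> _ = torch.manual_seed(42)
>>> metric = LabelRankingLoss()
>>> for _ in range(3):                      # e.g. three mini-batches
...     preds = torch.rand(8, 5)            # probabilities or logits, shape [N, L]
...     target = torch.randint(2, (8, 5))   # binary labels, shape [N, L]
...     metric.update(preds, target)
>>> epoch_loss = metric.compute()           # loss aggregated over all batches seen
>>> metric.reset()                          # clear accumulated state before reuse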

References

[1] Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US.

compute()[source]

Computes the label ranking loss.

Return type

Tensor

update(preds, target, sample_weight=None)[source]

Updates the metric state with a new batch of predictions and targets.

Parameters
  • preds (Tensor) – tensor of shape [N,L] where N is the number of samples and L is the number of labels. Should be either probabilities of the positive class or the corresponding logits.

  • target (Tensor) – tensor of shape [N,L] where N is the number of samples and L is the number of labels. Should only contain binary labels.

  • sample_weight (Optional[Tensor]) – tensor of shape [N] where N is the number of samples, specifying how much each sample should be weighted in the final score.

Return type

None

Functional Interface

torchmetrics.functional.label_ranking_loss(preds, target, sample_weight=None)[source]

Computes the label ranking loss for multilabel data [1]. The score corresponds to the average number of label pairs that are incorrectly ordered, given some predictions, weighted by the size of the label set and the number of labels not in the label set. The best score is 0.

Parameters
  • preds (Tensor) – tensor of shape [N,L] where N is the number of samples and L is the number of labels. Should be either probabilities of the positive class or the corresponding logits.

  • target (Tensor) – tensor of shape [N,L] where N is the number of samples and L is the number of labels. Should only contain binary labels.

  • sample_weight (Optional[Tensor]) – tensor of shape [N] where N is the number of samples, specifying how much each sample should be weighted in the final score.

Example

>>> import torch
>>> from torchmetrics.functional import label_ranking_loss
>>> _ = torch.manual_seed(42)
>>> preds = torch.rand(10, 5)
>>> target = torch.randint(2, (10, 5))
>>> label_ranking_loss(preds, target)
tensor(0.4167)
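
Continuing the example, per-sample weights can be supplied through the sample_weight argument documented above (the weight values here are purely illustrative):

>>> weights = torch.rand(10)    # one weight per sample, shape [N]
>>> weighted_loss = label_ranking_loss(preds, target, sample_weight=weights)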

References

[1] Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US.

Return type

Tensor