Label Ranking Loss

Module Interface

class torchmetrics.LabelRankingLoss(**kwargs)[source]

Computes the label ranking loss for multilabel data [1]. The score corresponds to the average number of label pairs that are incorrectly ordered, given some predictions, weighted by the size of the label set and the number of labels not in the label set. The best score is 0.

Parameters

kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Example

>>> import torch
>>> from torchmetrics import LabelRankingLoss
>>> _ = torch.manual_seed(42)
>>> preds = torch.rand(10, 5)
>>> target = torch.randint(2, (10, 5))
>>> metric = LabelRankingLoss()
>>> metric(preds, target)
tensor(0.4167)
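For intuition, the quantity described above can be computed by hand. The pure-Python sketch below counts, per sample, the fraction of (relevant, irrelevant) label pairs whose scores are ordered incorrectly, then averages over samples. Tie handling (scores counted as incorrectly ordered when equal) is an assumption here and may differ from the library's implementation.

```python
def ranking_loss(preds, target):
    """Average fraction of incorrectly ordered (relevant, irrelevant)
    label pairs per sample.

    preds:  list of per-sample score lists, shape [N][L]
    target: list of per-sample binary label lists, shape [N][L]
    """
    total = 0.0
    for scores, labels in zip(preds, target):
        rel = [i for i, y in enumerate(labels) if y == 1]  # labels in the set
        irr = [i for i, y in enumerate(labels) if y == 0]  # labels not in the set
        if not rel or not irr:
            continue  # no orderable pairs: sample contributes 0
        # a pair is incorrectly ordered when a relevant label does not
        # outscore an irrelevant one (ties counted as errors -- assumption)
        bad = sum(1 for i in rel for j in irr if scores[i] <= scores[j])
        total += bad / (len(rel) * len(irr))  # normalize by pair count
    return total / len(preds)

print(ranking_loss([[0.8, 0.2, 0.6]], [[1, 0, 1]]))  # perfectly ordered -> 0.0
print(ranking_loss([[0.2, 0.8, 0.6]], [[1, 0, 0]]))  # fully inverted  -> 1.0
```

With one perfectly ordered sample and one fully inverted sample, the result is the average, 0.5.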

References

[1] Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US.

compute()[source]

Computes the label ranking loss.

Return type

Tensor

update(preds, target, sample_weight=None)[source]
Parameters
  • preds (Tensor) – tensor of shape [N,L], where N is the number of samples and L is the number of labels. Should be either probabilities of the positive class or the corresponding logits.

  • target (Tensor) – tensor of shape [N,L] where N is the number of samples and L is the number of labels. Should only contain binary labels.

  • sample_weight (Optional[Tensor]) – tensor of shape [N], where N is the number of samples, giving how much each sample should be weighted in the final score.

Return type

None
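The sketch below illustrates one plausible way `sample_weight` could combine per-sample losses: a weighted mean, sum(w_i * l_i) / sum(w_i). This matches the documented intent ("how much each sample should be weighted in the final score"), but the exact normalization used inside torchmetrics is an assumption here, as is the helper name.

```python
def weighted_ranking_loss(per_sample_losses, sample_weight=None):
    """Combine per-sample ranking losses into one score via a
    weighted mean (hypothetical helper; normalization is assumed)."""
    if sample_weight is None:
        # unweighted case: plain arithmetic mean
        sample_weight = [1.0] * len(per_sample_losses)
    num = sum(w * l for w, l in zip(sample_weight, per_sample_losses))
    den = sum(sample_weight)
    return num / den

print(weighted_ranking_loss([0.0, 1.0]))              # unweighted mean -> 0.5
print(weighted_ranking_loss([0.0, 1.0], [3.0, 1.0]))  # weighted mean   -> 0.25
```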

Functional Interface

torchmetrics.functional.label_ranking_loss(preds, target, sample_weight=None)[source]

Computes the label ranking loss for multilabel data [1]. The score corresponds to the average number of label pairs that are incorrectly ordered, given some predictions, weighted by the size of the label set and the number of labels not in the label set. The best score is 0.

Parameters
  • preds (Tensor) – tensor of shape [N,L], where N is the number of samples and L is the number of labels. Should be either probabilities of the positive class or the corresponding logits.

  • target (Tensor) – tensor of shape [N,L] where N is the number of samples and L is the number of labels. Should only contain binary labels.

  • sample_weight (Optional[Tensor]) – tensor of shape [N], where N is the number of samples, giving how much each sample should be weighted in the final score.

Example

>>> import torch
>>> from torchmetrics.functional import label_ranking_loss
>>> _ = torch.manual_seed(42)
>>> preds = torch.rand(10, 5)
>>> target = torch.randint(2, (10, 5))
>>> label_ranking_loss(preds, target)
tensor(0.4167)

References

[1] Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US.

Return type

Tensor
