Label Ranking Loss¶
Module Interface¶
- class torchmetrics.LabelRankingLoss(**kwargs)[source]
Computes the label ranking loss for multilabel data [1]. The score corresponds to the average number of label pairs that are incorrectly ordered given some predictions, weighted by the size of the label set and the number of labels not in the label set. The best score is 0.
- Parameters
  - kwargs¶ (Any) – Additional keyword arguments, see Advanced metric settings for more info.
Example
>>> import torch
>>> from torchmetrics import LabelRankingLoss
>>> _ = torch.manual_seed(42)
>>> preds = torch.rand(10, 5)
>>> target = torch.randint(2, (10, 5))
>>> metric = LabelRankingLoss()
>>> metric(preds, target)
tensor(0.4167)
References
[1] Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US.
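To make the definition above concrete, here is a hand-rolled version of the pairwise computation. This is a minimal sketch for illustration, not the library's implementation: it ignores tied scores and assumes samples whose label set is empty or full contribute a loss of zero. It should reproduce the tensor(0.4167) from the example above.
>>> import torch
>>> from torchmetrics import LabelRankingLoss
>>> def ranking_loss_by_hand(preds, target):
...     # fraction of (relevant, irrelevant) label pairs per sample where
...     # the irrelevant label scored higher, averaged over all samples
...     losses = []
...     for scores, labels in zip(preds, target):
...         pos = scores[labels == 1]  # scores of labels in the label set
...         neg = scores[labels == 0]  # scores of labels not in the label set
...         if len(pos) == 0 or len(neg) == 0:
...             losses.append(torch.tensor(0.0))  # no pairs to order
...             continue
...         wrong = (neg.unsqueeze(1) > pos.unsqueeze(0)).sum()  # misordered pairs
...         losses.append(wrong / (len(pos) * len(neg)))  # normalize by |Y| * |Y^c|
...     return torch.stack(losses).mean()
>>> _ = torch.manual_seed(42)
>>> preds = torch.rand(10, 5)
>>> target = torch.randint(2, (10, 5))
>>> by_hand = ranking_loss_by_hand(preds, target)  # expect ~tensor(0.4167)
>>> official = LabelRankingLoss()(preds, target)   # tensor(0.4167), as above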
- update(preds, target, sample_weight=None)[source]
- Parameters
  - preds¶ (Tensor) – tensor of shape [N, L], where N is the number of samples and L is the number of labels. Should either be probabilities of the positive class or the corresponding logits.
  - target¶ (Tensor) – tensor of shape [N, L], where N is the number of samples and L is the number of labels. Should only contain binary labels.
  - sample_weight¶ (Optional[Tensor]) – tensor of shape [N], where N is the number of samples. How much each sample should be weighted in the final score.
- Return type
  None
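As with any torchmetrics metric, state accumulates across calls: invoke update() once per batch and compute() once at the end. A minimal sketch of that loop (the batch shapes are placeholders):
>>> import torch
>>> from torchmetrics import LabelRankingLoss
>>> _ = torch.manual_seed(42)
>>> metric = LabelRankingLoss()
>>> for _step in range(4):  # e.g. four validation batches
...     preds = torch.rand(10, 5)            # probabilities (or logits)
...     target = torch.randint(2, (10, 5))   # binary label matrix
...     metric.update(preds, target)
>>> loss = metric.compute()  # ranking loss accumulated over all 40 samples
>>> metric.reset()           # clear state before the next epoch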
- class torchmetrics.classification.MultilabelRankingLoss(num_labels, ignore_index=None, validate_args=True, **kwargs)[source]
Computes the label ranking loss for multilabel data [1]. The score corresponds to the average number of label pairs that are incorrectly ordered given some predictions, weighted by the size of the label set and the number of labels not in the label set. The best score is 0.
Accepts the following input tensors:
- preds (float tensor): (N, C, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0, 1] range, we consider the input to be logits and will auto-apply sigmoid per element.
- target (int tensor): (N, C, ...). Target should be a tensor containing ground truth labels, and therefore only contain {0, 1} values (except if ignore_index is specified).
Additional dimensions ... will be flattened into the batch dimension.
- Parameters
  - preds¶ – Tensor with predictions
  - target¶ – Tensor with true labels
  - num_labels¶ (int) – Integer specifying the number of labels
  - ignore_index¶ (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation (see the sketch below)
  - validate_args¶ (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.
  - kwargs¶ (Any) – Additional keyword arguments, see Advanced metric settings for more info.
Example
>>> import torch
>>> from torchmetrics.classification import MultilabelRankingLoss
>>> _ = torch.manual_seed(42)
>>> preds = torch.rand(10, 5)
>>> target = torch.randint(2, (10, 5))
>>> metric = MultilabelRankingLoss(num_labels=5)
>>> metric(preds, target)
tensor(0.4167)
References
[1] Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US.
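A short sketch of the ignore_index option: target entries equal to the ignored value are excluded from the metric calculation. The sentinel -1 below is an arbitrary choice for this illustration.
>>> import torch
>>> from torchmetrics.classification import MultilabelRankingLoss
>>> _ = torch.manual_seed(42)
>>> preds = torch.rand(10, 5)
>>> target = torch.randint(2, (10, 5))
>>> target[0, 0] = -1  # mark one label of the first sample as unknown
>>> metric = MultilabelRankingLoss(num_labels=5, ignore_index=-1)
>>> loss = metric(preds, target)  # the ignored entry does not contribute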
- compute()[source]
Override this method to compute the final metric value from state variables synchronized across the distributed backend.
- Return type
  Tensor
Functional Interface¶
- torchmetrics.functional.label_ranking_loss(preds, target, sample_weight=None)[source]
Computes the label ranking loss for multilabel data [1]. The score corresponds to the average number of label pairs that are incorrectly ordered given some predictions, weighted by the size of the label set and the number of labels not in the label set. The best score is 0.
- Parameters
  - preds¶ (Tensor) – tensor of shape [N, L], where N is the number of samples and L is the number of labels. Should either be probabilities of the positive class or the corresponding logits.
  - target¶ (Tensor) – tensor of shape [N, L], where N is the number of samples and L is the number of labels. Should only contain binary labels.
  - sample_weight¶ (Optional[Tensor]) – tensor of shape [N], where N is the number of samples. How much each sample should be weighted in the final score (see the weighted example below).
Example
>>> import torch
>>> from torchmetrics.functional import label_ranking_loss
>>> _ = torch.manual_seed(42)
>>> preds = torch.rand(10, 5)
>>> target = torch.randint(2, (10, 5))
>>> label_ranking_loss(preds, target)
tensor(0.4167)
References
[1] Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US.
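The sample_weight argument re-weights each sample's contribution to the averaged score. A minimal sketch, down-weighting the first half of the batch:
>>> import torch
>>> from torchmetrics.functional import label_ranking_loss
>>> _ = torch.manual_seed(42)
>>> preds = torch.rand(10, 5)
>>> target = torch.randint(2, (10, 5))
>>> weights = torch.tensor([0.5] * 5 + [1.0] * 5)  # per-sample weights, shape [N]
>>> loss = label_ranking_loss(preds, target, sample_weight=weights)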
- Return type
  Tensor
- torchmetrics.functional.classification.multilabel_ranking_loss(preds, target, num_labels, ignore_index=None, validate_args=True)[source]
Computes the label ranking loss for multilabel data [1]. The score corresponds to the average number of label pairs that are incorrectly ordered given some predictions, weighted by the size of the label set and the number of labels not in the label set. The best score is 0.
Accepts the following input tensors:
- preds (float tensor): (N, C, ...). Preds should be a tensor containing probabilities or logits for each observation. If preds has values outside the [0, 1] range, we consider the input to be logits and will auto-apply sigmoid per element (see the sketch at the end of this entry).
- target (int tensor): (N, C, ...). Target should be a tensor containing ground truth labels, and therefore only contain {0, 1} values (except if ignore_index is specified).
Additional dimensions ... will be flattened into the batch dimension.
- Parameters
  - preds¶ (Tensor) – Tensor with predictions
  - target¶ (Tensor) – Tensor with true labels
  - num_labels¶ (int) – Integer specifying the number of labels
  - ignore_index¶ (Optional[int]) – Specifies a target value that is ignored and does not contribute to the metric calculation
  - validate_args¶ (bool) – bool indicating if input arguments and tensors should be validated for correctness. Set to False for faster computations.
Example
>>> import torch
>>> from torchmetrics.functional.classification import multilabel_ranking_loss
>>> _ = torch.manual_seed(42)
>>> preds = torch.rand(10, 5)
>>> target = torch.randint(2, (10, 5))
>>> multilabel_ranking_loss(preds, target, num_labels=5)
tensor(0.4167)
References
[1] Tsoumakas, G., Katakis, I., & Vlahavas, I. (2010). Mining multi-label data. In Data mining and knowledge discovery handbook (pp. 667-685). Springer US.
- Return type
  Tensor
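Since the ranking loss depends only on the ordering of scores within each sample, and the sigmoid is strictly monotone, passing raw logits or the corresponding probabilities yields the same value; the auto-applied sigmoid mentioned above never changes a ranking. A small check:
>>> import torch
>>> from torchmetrics.functional.classification import multilabel_ranking_loss
>>> _ = torch.manual_seed(42)
>>> logits = torch.randn(10, 5)  # values outside [0, 1] are treated as logits
>>> target = torch.randint(2, (10, 5))
>>> from_logits = multilabel_ranking_loss(logits, target, num_labels=5)
>>> from_probs = multilabel_ranking_loss(logits.sigmoid(), target, num_labels=5)
>>> bool(torch.isclose(from_logits, from_probs))
True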