BLEU Score

Module Interface

class torchmetrics.BLEUScore(n_gram=4, smooth=False, compute_on_step=None, **kwargs)[source]

Calculate the BLEU score of machine-translated text with one or more references.

Parameters
  • n_gram (int) – Maximum n-gram order to use when computing the score, from 1 to 4

  • smooth (bool) – Whether to apply smoothing; see [2]

  • compute_on_step (Optional[bool]) –

    Forward only calls update() and returns None if this is set to False.

    Deprecated since version v0.8: this argument no longer has any effect and will be removed in v0.9.

  • kwargs (Dict[str, Any]) – Additional keyword arguments, see Advanced metric settings for more info.

Example

>>> from torchmetrics import BLEUScore
>>> preds = ['the cat is on the mat']
>>> target = [['there is a cat on the mat', 'a cat is on the mat']]
>>> metric = BLEUScore()
>>> metric(preds, target)
tensor(0.7598)
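
A minimal sketch with non-default constructor arguments, reusing the preds and target from the example above (the resulting value is omitted, since it depends on the chosen n_gram and smooth settings):

>>> metric = BLEUScore(n_gram=2, smooth=True)  # bigram BLEU with smoothing per [2]
>>> score = metric(preds, target)  # 0-dim Tensor holding the smoothed 2-gram score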

References

[1] BLEU: a Method for Automatic Evaluation of Machine Translation by Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu

[2] Automatic Evaluation of Machine Translation Quality Using Longest Common Subsequence and Skip-Bigram Statistics by Chin-Yew Lin and Franz Josef Och

compute()[source]

Calculate BLEU score.

Return type

  Tensor

Returns

  Tensor with the BLEU score

update(preds, target)[source]

Update the metric state with predictions and their reference translations.

Parameters
  • preds (Sequence[str]) – An iterable of machine-translated corpus

  • target (Sequence[Sequence[str]]) – An iterable of iterables of reference corpus

Return type

  None
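
Because the metric is stateful, update() can be called once per batch and compute() once at the end to obtain a corpus-level score. A minimal sketch of this pattern (the example sentences are illustrative only):

>>> from torchmetrics import BLEUScore
>>> metric = BLEUScore()
>>> metric.update(['the cat is on the mat'], [['a cat is on the mat']])
>>> metric.update(['there is a dog'], [['there is a dog outside']])
>>> score = metric.compute()  # BLEU over the n-gram statistics of both batches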

Functional Interface

torchmetrics.functional.bleu_score(preds, target, n_gram=4, smooth=False)[source]

Calculate the BLEU score of machine-translated text with one or more references.

Parameters
  • preds (Sequence[str]) – An iterable of machine-translated corpus

  • target (Sequence[Sequence[str]]) – An iterable of iterables of reference corpus

  • n_gram (int) – Maximum n-gram order to use when computing the score, from 1 to 4

  • smooth (bool) – Whether to apply smoothing; see [2]

Return type

  Tensor

Returns

  Tensor with the BLEU score

Example

>>> from torchmetrics.functional import bleu_score
>>> preds = ['the cat is on the mat']
>>> target = [['there is a cat on the mat', 'a cat is on the mat']]
>>> bleu_score(preds, target)
tensor(0.7598)
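
The functional form accepts the same n_gram and smooth arguments as the module; a minimal sketch with non-default settings, reusing the preds and target defined above (output omitted, since it depends on the arguments):

>>> score = bleu_score(preds, target, n_gram=2, smooth=True)  # smoothed bigram BLEU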

References

[1] BLEU: a Method for Automatic Evaluation of Machine Translation by Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu

[2] Automatic Evaluation of Machine Translation Quality Using Longest Common Subsequence and Skip-Bigram Statistics by Chin-Yew Lin and Franz Josef Och
