Translation Edit Rate (TER)

Module Interface

class torchmetrics.TranslationEditRate(normalize=False, no_punctuation=False, lowercase=True, asian_support=False, return_sentence_level_score=False, **kwargs)[source]

Calculate the translation edit rate (TER) of machine-translated text against one or more references.

This implementation follows the one at https://github.com/mjpost/sacrebleu/blob/master/sacrebleu/metrics/ter.py. The sacrebleu implementation is a near-exact reimplementation of the Tercom algorithm and produces identical results on all “sane” outputs.

Parameters
  • normalize (bool) – An indication of whether general tokenization should be applied.

  • no_punctuation (bool) – An indication of whether punctuation should be removed from the sentences.

  • lowercase (bool) – An indication of whether case-insensitivity should be enabled.

  • asian_support (bool) – An indication of whether Asian characters should be processed.

  • return_sentence_level_score (bool) – An indication of whether a sentence-level TER score should be returned.

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

Example

>>> preds = ['the cat is on the mat']
>>> target = [['there is a cat on the mat', 'a cat is on the mat']]
>>> metric = TranslationEditRate()
>>> metric(preds, target)
tensor(0.1538)
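
As a supplementary sketch (not part of the original example), the metric can also be asked for sentence-level scores: with return_sentence_level_score=True the forward call returns a tuple instead of a single tensor. Output values are omitted here because they depend on the tokenization settings.

>>> metric = TranslationEditRate(return_sentence_level_score=True)
>>> corpus_ter, sentence_ter = metric(preds, target)  # corpus-level score plus per-sentence scores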

References

[1] A Study of Translation Edit Rate with Targeted Human Annotation by Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla and John Makhoul


compute()[source]

Calculate the translation edit rate (TER).

Return type

Union[Tensor, Tuple[Tensor, Tensor]]

Returns

A corpus-level translation edit rate (TER) and, if return_sentence_level_score=True, additionally a list of sentence-level TER scores.

update(preds, target)[source]

Update TER statistics.

Parameters
  • preds (Union[str, Sequence[str]]) – An iterable of hypothesis (predicted) sentences.

  • target (Sequence[Union[str, Sequence[str]]]) – An iterable of iterables of reference sentences.

Return type

None
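
A minimal sketch of the batched workflow, assuming the same preds/target structure as in the example above: update() only accumulates edit statistics (it returns None), and compute() produces the corpus-level score from everything seen so far.

>>> metric = TranslationEditRate()
>>> metric.update(['the cat is on the mat'], [['there is a cat on the mat', 'a cat is on the mat']])
>>> metric.update(['a cat sat on the mat'], [['a cat is on the mat']])  # hypothetical second batch
>>> metric.compute()  # corpus-level TER over both batches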

Functional Interface

torchmetrics.functional.translation_edit_rate(preds, target, normalize=False, no_punctuation=False, lowercase=True, asian_support=False, return_sentence_level_score=False)[source]

Calculate the translation edit rate (TER) of machine-translated text against one or more references. This implementation follows the one at https://github.com/mjpost/sacrebleu/blob/master/sacrebleu/metrics/ter.py. The sacrebleu implementation is a near-exact reimplementation of the Tercom algorithm and produces identical results on all “sane” outputs.

Parameters
  • preds (Union[str, Sequence[str]]) – An iterable of hypothesis (predicted) sentences.

  • target (Sequence[Union[str, Sequence[str]]]) – An iterable of iterables of reference sentences.

  • normalize (bool) – An indication of whether general tokenization should be applied.

  • no_punctuation (bool) – An indication of whether punctuation should be removed from the sentences.

  • lowercase (bool) – An indication of whether case-insensitivity should be enabled.

  • asian_support (bool) – An indication of whether Asian characters should be processed.

  • return_sentence_level_score (bool) – An indication of whether a sentence-level TER score should be returned.

Return type

Union[Tensor, Tuple[Tensor, List[Tensor]]]

Returns

A corpus-level translation edit rate (TER) and, if return_sentence_level_score=True, additionally a list of sentence-level TER scores.

Example

>>> preds = ['the cat is on the mat']
>>> target = [['there is a cat on the mat', 'a cat is on the mat']]
>>> translation_edit_rate(preds, target)
tensor(0.1538)
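
A hedged sketch combining the tokenization flags from the signature above; exact scores are not shown since they depend on how the flags change the sacrebleu tokenization.

>>> from torchmetrics.functional import translation_edit_rate
>>> translation_edit_rate(preds, target, normalize=True, no_punctuation=True, lowercase=False)  # returns a tensor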

References

[1] A Study of Translation Edit Rate with Targeted Human Annotation by Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla and John Makhoul