Learned Perceptual Image Patch Similarity (LPIPS)
- class torchmetrics.image.lpip.LearnedPerceptualImagePatchSimilarity(net_type='alex', reduction='mean', compute_on_step=None, **kwargs)
The Learned Perceptual Image Patch Similarity (LPIPS) is used to judge the perceptual similarity between two images. LPIPS essentially computes the similarity between the activations of two image patches for some pre-defined network. This measure has been shown to match human perception well. A low LPIPS score means that the image patches are perceptually similar.
Both input image patches are expected to have shape [N, 3, H, W] and be normalized to the [-1,1] range. The minimum size of H, W depends on the chosen backbone (see net_type arg).
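For example, images in the common [0, 1] range can be rescaled to [-1, 1] before being passed to the metric. A minimal sketch of this preprocessing step (the batch size, image size, and net_type choice are illustrative, not prescribed by the API):

>>> import torch
>>> from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity
>>> lpips = LearnedPerceptualImagePatchSimilarity(net_type='alex')
>>> # images in the usual [0, 1] range, e.g. straight out of ToTensor()
>>> img1 = torch.rand(4, 3, 128, 128)
>>> img2 = torch.rand(4, 3, 128, 128)
>>> # rescale to the expected [-1, 1] range before calling the metric
>>> score = lpips(img1 * 2 - 1, img2 * 2 - 1)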
Note: using this metric requires you to have the lpips package installed. Either install as pip install torchmetrics[image] or pip install lpips.
Note: this metric is not scriptable when using torch<1.8. Please update your PyTorch installation if this is an issue.
- compute_on_step – Forward only calls update() and returns None if this is set to False.

  Deprecated since version v0.8: Argument has no use anymore and will be removed in v0.9.
>>> import torch
>>> _ = torch.manual_seed(123)
>>> from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity
>>> lpips = LearnedPerceptualImagePatchSimilarity(net_type='vgg')
>>> img1 = torch.rand(10, 3, 100, 100)
>>> img2 = torch.rand(10, 3, 100, 100)
>>> lpips(img1, img2)
tensor(0.3566, grad_fn=<SqueezeBackward0>)
Initializes internal Module state, shared by both nn.Module and ScriptModule.
Compute final perceptual similarity metric.
- Return type
  Tensor
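A typical accumulation workflow calls update() once per batch and compute() once at the end to obtain the reduced score. A minimal sketch (the loop structure, batch sizes, and random inputs are illustrative only):

>>> import torch
>>> from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity
>>> lpips = LearnedPerceptualImagePatchSimilarity(net_type='alex')
>>> for _ in range(3):  # accumulate over several batches
...     pred = torch.rand(8, 3, 128, 128) * 2 - 1    # rescaled to [-1, 1]
...     target = torch.rand(8, 3, 128, 128) * 2 - 1
...     lpips.update(pred, target)
>>> score = lpips.compute()  # final score, reduced according to the reduction argument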