Learned Perceptual Image Patch Similarity (LPIPS)
- class torchmetrics.image.lpip.LearnedPerceptualImagePatchSimilarity(net_type='alex', reduction='mean', normalize=False, **kwargs)
The Learned Perceptual Image Patch Similarity (LPIPS) is used to judge the perceptual similarity between two images. LPIPS essentially computes the similarity between the activations of two image patches for some pre-defined network. This measure has been shown to match human perception well. A low LPIPS score means that the image patches are perceptually similar.
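The core idea can be illustrated with a toy sketch in plain Python. This is *not* the real implementation (which runs a pretrained AlexNet/VGG/SqueezeNet backbone); it only shows the distance computation LPIPS applies to the extracted activations: unit-normalize each feature vector in the channel dimension, take the weighted squared difference, and average over spatial positions. The function name, the layer weights, and the tiny hand-made feature lists are all hypothetical.

```python
import math

def lpips_toy(feats1, feats2, layer_weights):
    """Toy sketch of the LPIPS distance (illustration only).

    `feats1`/`feats2` are per-layer lists of per-position feature vectors
    (plain lists of floats); `layer_weights` holds one hypothetical learned
    scalar weight per layer.
    """
    total = 0.0
    for f1_layer, f2_layer, w in zip(feats1, feats2, layer_weights):
        layer_dist = 0.0
        for v1, v2 in zip(f1_layer, f2_layer):
            # Unit-normalize each feature vector along the channel
            # dimension, as LPIPS does before comparing activations.
            n1 = math.sqrt(sum(x * x for x in v1)) or 1.0
            n2 = math.sqrt(sum(x * x for x in v2)) or 1.0
            u1 = [x / n1 for x in v1]
            u2 = [x / n2 for x in v2]
            # Squared difference of the normalized activations,
            # scaled by the layer's learned weight.
            layer_dist += w * sum((a - b) ** 2 for a, b in zip(u1, u2))
        # Average over spatial positions within the layer.
        total += layer_dist / len(f1_layer)
    return total

# Identical activations give a distance of 0 (perceptually identical),
# matching the intuition that a low LPIPS score means high similarity.
feats = [[[1.0, 2.0], [0.5, 0.5]]]   # one layer, two spatial positions
print(lpips_toy(feats, feats, [1.0]))  # → 0.0
```

In the real metric, the feature vectors come from intermediate layers of the chosen backbone (see the `net_type` argument) and the per-layer weights are learned to match human judgments.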
Both input image patches are expected to have shape [N, 3, H, W]. The minimum size of H, W depends on the chosen backbone (see net_type arg).
Note: Using this metric requires the lpips package to be installed. Install it either with pip install torchmetrics[image] or with pip install lpips.
Note: This metric is not scriptable when using torch<1.8. Please update your PyTorch installation if this is an issue.
Example:
>>> import torch
>>> _ = torch.manual_seed(123)
>>> from torchmetrics.image.lpip import LearnedPerceptualImagePatchSimilarity
>>> lpips = LearnedPerceptualImagePatchSimilarity(net_type='vgg')
>>> img1 = torch.rand(10, 3, 100, 100)
>>> img2 = torch.rand(10, 3, 100, 100)
>>> lpips(img1, img2)
tensor(0.3566, grad_fn=<SqueezeBackward0>)
Compute final perceptual similarity metric.
- Return type
  Tensor