Describe the bug
https://huggingface.co/docs/datasets/en/semantic_segmentation shows a wrong example with torchvision transforms. Specifically, as one can see in the screenshot below, the object boundaries have weird colors. The original example with `albumentations` is correct.

That is because `torchvision.transforms.Resize` interpolates everything bilinearly, which is wrong when applied to segmentation labels - you just cannot mix the two. Overall, `torchvision.transforms` is designed for classification only and cannot be applied to images and masks together, unless you write two separate branches of augmentations. The correct way would be to use the `v2` version of the transforms and convert the segmentation labels to https://pytorch.org/vision/main/generated/torchvision.tv_tensors.Mask.html#torchvision.tv_tensors.Mask objects.

Steps to reproduce the bug
Go to the website.
https://huggingface.co/docs/datasets/en/semantic_segmentation
Expected behavior
Results similar to `albumentations`. Or remove the torchvision part altogether. Or use `kornia` instead.

Environment info
Irrelevant