Add Intel's GPU / XPU support to TIMM's validation script #2138

Open · wants to merge 1 commit into main
Conversation

@cfgfung commented Apr 8, 2024

Background

This PR is mainly about increasing the diversity of supported accelerators. Intel released a new GPU last year and is promoting it across different platforms. This is a first step toward incorporating the new hardware into TIMM; more optimizations and engineering efforts will follow.

Related materials:
https://pytorch.org/tutorials/recipes/intel_extension_for_pytorch.html
https://github.com/intel/intel-extension-for-pytorch/tree/xpu-master
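For context, here is a minimal sketch of how XPU device selection works with Intel Extension for PyTorch (IPEX). It is illustrative only and not the exact diff in this PR; the fallback order and the use of timm.create_model are assumptions.

# Minimal sketch: selecting an Intel XPU device via IPEX for TIMM inference.
# Assumes IPEX is installed; this is not the exact code added by the PR.
import torch
import timm

try:
    import intel_extension_for_pytorch as ipex  # noqa: F401  (registers the torch.xpu backend)
    has_xpu = torch.xpu.is_available()
except ImportError:
    has_xpu = False

# Prefer XPU when available, otherwise fall back to CUDA or CPU.
device = torch.device('xpu' if has_xpu else 'cuda' if torch.cuda.is_available() else 'cpu')

model = timm.create_model('resnet50.a1_in1k', pretrained=True).to(device).eval()

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224, device=device)
    out = model(x)
print(out.shape, device)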

Environment

PyTorch version: 2.1.0a0+cxx11.abi
PyTorch CXX11 ABI: Yes
IPEX version: 2.1.10+xpu
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0

GPU: Intel(R) Data Center GPU Max 1550
Dataset: ImageNet-1K validation

Result

Without autocast:

python validate.py --data-dir ../datasets/ --dataset hfds/imagenet-1k --split validation --device xpu --model resnet50.a1_in1k --batch-size 256 --pretrained

[screenshot: validation results for resnet50.a1_in1k on XPU without autocast]

With autocast:

python validate.py --data-dir ../datasets/ --dataset hfds/imagenet-1k --split validation --device xpu --model resnet50.a1_in1k --batch-size 256 --pretrained --amp

[screenshot: validation results for resnet50.a1_in1k on XPU with autocast]
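For reference, a hedged sketch of what the mixed-precision path on XPU can look like with torch.autocast. The dtype choice (bfloat16) and the device_type='xpu' usage are assumptions based on the IPEX documentation, not the exact --amp handling in validate.py.

# Sketch: mixed-precision inference on XPU via torch.autocast.
# Assumes IPEX is installed and an XPU device is present; bfloat16 is an assumed dtype.
# IPEX also exposes torch.xpu.amp.autocast() as an alternative entry point.
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (registers the xpu device)
import timm

model = timm.create_model('resnet50.a1_in1k', pretrained=True).to('xpu').eval()
x = torch.randn(256, 3, 224, 224, device='xpu')

with torch.no_grad(), torch.autocast(device_type='xpu', dtype=torch.bfloat16):
    logits = model(x)
print(logits.dtype)  # expected: torch.bfloat16 under autocast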

@rwightman (Collaborator)

@cfgfung Hi, thanks for the PR. I have to come up with a better plan for handling the growing number of accelerator differences; with #2109 etc., we need a better way to centralize this code and support easy additions without needing checks and imports in every script...

I had an idea in the works in the bitsandtpu branch for TPU support, but haven't had a chance to unify that here with what I've learned since.
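One possible shape for such a centralized helper is sketched below. It is purely illustrative: the function name resolve_device and the backend fallback order are assumptions, not the design in the bitsandtpu branch or timm's actual API.

# Illustrative sketch of a single device-resolution helper that scripts could import,
# so per-script accelerator checks and imports are no longer needed.
import torch


def resolve_device(name: str = '') -> torch.device:
    """Resolve an explicit device string, or auto-detect an available accelerator."""
    if name:
        if name.startswith('xpu'):
            # Importing IPEX registers the torch.xpu backend (assumed requirement).
            import intel_extension_for_pytorch  # noqa: F401
        return torch.device(name)
    # Auto-detect: prefer CUDA, then XPU, then MPS, then CPU (order is an assumption).
    if torch.cuda.is_available():
        return torch.device('cuda')
    try:
        import intel_extension_for_pytorch  # noqa: F401
        if torch.xpu.is_available():
            return torch.device('xpu')
    except ImportError:
        pass
    if getattr(torch.backends, 'mps', None) and torch.backends.mps.is_available():
        return torch.device('mps')
    return torch.device('cpu')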

@cfgfung (Author) commented Apr 11, 2024

Hi Ross,

Thanks for the prompt review! Do you have any design/draft plan in mind? We can have some discussions on unifying the interface/APIs. I am working on Intel's GPUs and Gaudi, and might be able to provide some insights on those. Please feel free to contact me through email: kwun.fung.lau@intel.com

Related information:
https://www.intc.com/news-events/press-releases/detail/1689/intel-unleashes-enterprise-ai-with-gaudi-3-ai-open-systems
https://www.intel.com/content/www/us/en/products/details/discrete-gpus/data-center-gpu/max-series.html
