
[Discussion] ORT GPU binaries do not contain DML #20638

Open
gedoensmax opened this issue May 10, 2024 · 2 comments
Labels
ep:CUDA (issues related to the CUDA execution provider)
ep:DML (issues related to the DirectML execution provider)
ep:OpenVINO (issues related to the OpenVINO execution provider)
ep:TensorRT (issues related to the TensorRT execution provider)
platform:windows (issues related to the Windows platform)

Comments

@gedoensmax
Contributor

Describe the issue

From what I understand, DirectML is considered the default GPU backend on Windows systems. Nonetheless, the "GPU" build does not contain DML. In my opinion it would make more sense for the GPU package to include DirectML alongside CUDA and TRT; such a package would then run on any GPU. In addition, it could dynamically load OpenVINO for additional GPU support on Intel hardware.
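For context, turning on the DirectML EP is a one-liner through the C++ API, but only in a binary compiled with --use_dml, which is exactly what the released GPU package lacks. A minimal sketch of what a combined package would allow:

```cpp
// Sketch: enabling the DirectML EP via the C++ API. This requires an
// onnxruntime binary built with --use_dml; with the released GPU package
// this factory function is not available, which is the point of this issue.
#include <onnxruntime_cxx_api.h>
#include <dml_provider_factory.h>

int main() {
  Ort::Env env(ORT_LOGGING_LEVEL_WARNING, "dml-example");
  Ort::SessionOptions opts;
  // Device 0 = the default DirectX adapter; any DX12-capable GPU works,
  // which is why DML is a reasonable fallback in a generic GPU package.
  Ort::ThrowOnError(
      OrtSessionOptionsAppendExecutionProvider_DML(opts, /*device_id=*/0));
  // A session created with these options would dispatch to DirectML:
  // Ort::Session session(env, L"model.onnx", opts);
  return 0;
}
```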

To reproduce

After downloading a release package, simply call GetAvailableProviders(); it reports the providers the binary supports, and DML is not among them. A second problem is that when OpenVINO is loaded dynamically, it is never reported as an available provider.
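Concretely, the check looks like this (a minimal sketch; with the released GPU package the expectation is that DmlExecutionProvider does not appear in the output):

```cpp
// Minimal sketch: list the execution providers compiled into this binary.
#include <onnxruntime_cxx_api.h>
#include <iostream>
#include <string>

int main() {
  // Ort::GetAvailableProviders() wraps the C API and returns the names of
  // all EPs this onnxruntime build supports.
  for (const std::string& provider : Ort::GetAvailableProviders())
    std::cout << provider << "\n";
  // With the released GPU package, "DmlExecutionProvider" is absent, and a
  // dynamically loaded OpenVINO EP is also never listed here.
  return 0;
}
```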

Urgency

No response

Platform

Windows

OS Version

11

ONNX Runtime Installation

Released Package

ONNX Runtime Version or Commit ID

1.17.3

ONNX Runtime API

C++

Architecture

X64

Execution Provider

CUDA, DirectML, TensorRT

Execution Provider Library Version

No response

@github-actions github-actions bot added labels ep:CUDA, ep:DML, ep:OpenVINO, ep:TensorRT, platform:windows on May 10, 2024
@gedoensmax
Contributor Author

@snnn or @pranavsharma, can you help with this?

@tenten8401

tenten8401 commented May 24, 2024

+1 on this; I'm currently making a custom build for DirectML support.

Something I'm running into is that I can build with either CUDA or DirectML alone, but when I try both in the same build it fails because it can't find a number of DML symbols, possibly related to why there are no combined builds by default?

```shell
.\build.bat --config RelWithDebInfo --parallel --skip_tests ^
  --build_shared_lib --compile_no_warning_as_error ^
  --use_cuda --cudnn_home "C:\Program Files\NVIDIA\CUDNN\v9.1" ^
  --cuda_home "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.4" ^
  --use_tensorrt --tensorrt_home "F:\Downloads\TensorRT-10.0.1.6" ^
  --use_dml
```

(screenshot of the failing build output attached)
