
Fix table transformers cuda export #1775

Closed
wants to merge 1 commit

Conversation

mht-sharma
Contributor

@mht-sharma mht-sharma commented Mar 26, 2024

As per title!

Issue: #1774

@@ -274,8 +274,7 @@ def _run_validation(

# Create ONNX Runtime session
session_options = SessionOptions()
# We could well set ORT_DISABLE_ALL here, but it makes CUDA export with O4 of gpt_neo fail
Contributor Author


A little confused by the comment: does the export of gpt_neo fail with DISABLE?

Collaborator


According to the comment at least

Contributor Author


I tried with EleutherAI/gpt-neo-1.3B, and the gpt-neo export now seems to work with DISABLE_ALL.

@mht-sharma mht-sharma requested a review from fxmarty March 26, 2024 10:44
@fxmarty
Collaborator

fxmarty commented Mar 26, 2024

@mht-sharma I'm not sure about this - actually we may be better off sticking with GraphOptimizationLevel::ORT_ENABLE_ALL during validation as this is the default of ORT, better to catch issues ahead?

But then again someone may use the ONNX export and then not use ORT.

@mht-sharma
Contributor Author

@mht-sharma I'm not sure about this - actually we may be better off sticking with GraphOptimizationLevel::ORT_ENABLE_ALL during validation as this is the default of ORT, better to catch issues ahead?
But then again someone may use the ONNX export and then not use ORT.

I agree with your points. Perhaps such a test would be more suitable for ORTModel than for the export? Given the current error message, users might assume that the model isn't supported, when in fact they simply need to disable the optimizations.

The alternative, obviously, is to add conditional statements to make it pass.

@mht-sharma mht-sharma closed this May 21, 2024