
Feature: Any GPU optimization features? #1881

Open
jbrepogmailcom opened this issue May 23, 2023 · 0 comments

Feature Description

It seems to me that there is little advantage to using a GPU with AutoKeras. For example, on a low-end PC the step duration during fit is 4 ms, while on a GPU costing thousands of dollars (NVIDIA T4) the step duration is 3 ms and GPU utilization stays around 20-25%. It would be good to have some kind of profiler that can tune parameters to make better use of the GPU.

I also tried the code below, but it did not improve performance.

Code Example

###### My special code here ##############
import tensorflow as tf
import autokeras as ak

# Let the GPU allocate memory on demand instead of pre-allocating all of it.
config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
sess = tf.compat.v1.Session(config=config)
##########################################
reg = ak.StructuredDataRegressor(project_name=modelName)

##########################################
# Pin training explicitly to the first GPU.
with tf.device('/gpu:0'):
##########################################
    reg.fit(X_train, y_train, validation_split=vSplit)
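
For illustration, here is a minimal sketch of settings that might raise GPU utilization in this setup, assuming TF2 eager execution (where the ConfigProto/Session calls above have no effect): the TF2-native memory-growth API, a larger batch size forwarded to Keras fit, and the built-in TensorFlow profiler attached through the TensorBoard callback. The batch_size value and the log directory are illustrative assumptions, not values from the original report.

import tensorflow as tf
import autokeras as ak

# TF2-native equivalent of allow_growth: let GPU memory grow on demand.
for gpu in tf.config.list_physical_devices("GPU"):
    tf.config.experimental.set_memory_growth(gpu, True)

reg = ak.StructuredDataRegressor(project_name=modelName)

# Capture a GPU timeline for a few training steps with the built-in profiler;
# results appear in TensorBoard under the "Profile" tab.
profiler = tf.keras.callbacks.TensorBoard(log_dir="logs", profile_batch=(10, 20))

# Larger batches usually raise GPU utilization on small tabular models,
# where per-step launch overhead otherwise dominates the 3-4 ms step time.
reg.fit(
    X_train,
    y_train,
    validation_split=vSplit,
    batch_size=1024,       # assumed value for illustration; tune for your data
    callbacks=[profiler],
)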

Reason

Solution
