The training process cannot continue #1536
@xgySTATISICT Can you post the configuration you used to train the model?
I also encountered the same problem. I tried both CPU and GPU, but training couldn't continue. Here is my configuration:

```python
model = ClassificationModel(Model1, Model2,
                            args={'num_train_epochs': 1,
                                  'overwrite_output_dir': True,
                                  'use_early_stopping': False,
                                  'use_cuda': False,
                                  'train_batch_size': 50,
                                  'do_lower_case': True,
                                  'silent': False,
                                  'no_cache': True,
                                  'no_save': True
                                  }
                            )

# Train the model
model.train_model(train_df)
```
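One detail worth checking in the configuration above: in simpletransformers, `use_cuda` is normally a keyword argument of the `ClassificationModel` constructor rather than a key inside `args`, so placing it in `args` may not actually disable the GPU. A sketch of the alternative placement (this assumes the standard simpletransformers constructor signature; verify against your installed version):

```python
# Training args without 'use_cuda'; in simpletransformers, CUDA is usually
# toggled via the ClassificationModel constructor, not the args dict.
train_args = {
    'num_train_epochs': 1,
    'overwrite_output_dir': True,
    'use_early_stopping': False,
    'train_batch_size': 50,
    'do_lower_case': True,
    'silent': False,
    'no_cache': True,
    'no_save': True,
}

# Hypothetical usage, matching the poster's placeholders Model1/Model2:
# model = ClassificationModel(Model1, Model2, args=train_args, use_cuda=False)
# model.train_model(train_df)
```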
@songzetao I encountered a similar problem and tried the following workaround; you may try it too. Add the following to your configuration. Basically, we are turning off multiprocessing.
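The code block from this comment is missing here. A commonly cited way to turn off multiprocessing in simpletransformers is through the model args; the following is a hypothetical reconstruction (the option names are assumed from simpletransformers' `ClassificationArgs`, so check them against your installed version):

```python
# Hypothetical sketch of "turn off multiprocessing" settings for
# simpletransformers; the original comment's code block was not captured.
multiprocessing_off = {
    'use_multiprocessing': False,                 # no worker processes for tokenization
    'use_multiprocessing_for_evaluation': False,  # same during evaluation
    'process_count': 1,                           # single process for data prep
    'dataloader_num_workers': 0,                  # load batches in the main process
}

# Merge into an existing args dict before constructing the model, e.g.:
# train_args.update(multiprocessing_off)
# model = ClassificationModel(Model1, Model2, args=train_args)
```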
@DamithDR Thank you very much for your answer. It really worked. Thank you again! 😊
@songzetao Glad it worked :)
I encountered the same problem. I have tried adding several fixes from others, as below, but the training is still stuck.
@swardiantara Can you post any logs you get, and maybe a screenshot of where you got stuck?
I tried to train, but the logs stopped updating at this step, even after 12 hours.