Sepformer training time #2414
Closed
HuangZiliAndy started this conversation in General
Replies: 0 comments
Hi, I am training a SepFormer on WHAMR! 16kHz using an Nvidia V100. I am following the setup in https://huggingface.co/speechbrain/sepformer-whamr16k, without speed perturbation or dynamic mixing. However, training is quite slow: one epoch takes around 2 hours, so 200 epochs would take roughly 16 days to finish.
Could you share your setup for training the SepFormer? Did you use multiple GPUs, and how long did training take? I noticed that the batch size is set to 1, so the slow training is not too surprising. Your help would be greatly appreciated!
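For reference, the 16-day figure above follows directly from the per-epoch timing; a quick sanity check of the arithmetic (the numbers are those reported in this post, not measured independently):

```python
# Back-of-the-envelope check of the training-time estimate quoted above.
hours_per_epoch = 2          # observed on a single V100, batch size 1 (from this post)
epochs = 200                 # epoch count assumed from the recipe mentioned above

total_hours = hours_per_epoch * epochs
total_days = total_hours / 24
print(f"{total_hours} hours ≈ {total_days:.1f} days")  # 400 hours ≈ 16.7 days
```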