Currently, WER filtering takes far too long with 8 workers, and going beyond 8 raises `OSError: [Errno 12] Cannot allocate memory` from `self.pid = os.fork()`. It also doesn't seem to cache the filtered data, which makes repeated runs on large datasets (up to 1M segments) impractical. Is there a way to expedite the filtering process?
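One library-agnostic workaround for the missing caching is to persist the filter result to disk keyed on its inputs, so the expensive pass runs only once per configuration. This is an illustrative sketch, not 🤗 Datasets' built-in caching; `cached_filter`, the cache directory, and the threshold handling are all hypothetical names:

```python
import hashlib
import pickle
from pathlib import Path

CACHE_DIR = Path("wer_filter_cache")  # hypothetical cache location

def cached_filter(segments, score_fn, threshold):
    """Keep segments whose score is <= threshold, caching the result on disk."""
    CACHE_DIR.mkdir(exist_ok=True)
    # Key the cache on the inputs, so changing the data or threshold re-runs the filter.
    key = hashlib.sha256(pickle.dumps((segments, threshold))).hexdigest()
    cache_file = CACHE_DIR / f"{key}.pkl"
    if cache_file.exists():
        return pickle.loads(cache_file.read_bytes())
    kept = [s for s in segments if score_fn(s) <= threshold]
    cache_file.write_bytes(pickle.dumps(kept))
    return kept
```

On a second run with the same segments and threshold, the cached result is loaded instead of recomputing every WER score.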
Hey @macabdul9 - do you have a bash configuration file you're using to reproduce this error? It would be super helpful to see your configuration so we can advise more appropriately here.
Generally speaking, you should ensure that the number of workers is less than or equal to the number of CPUs on your device (you can check this with the bash command `lscpu`).
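The advice above can be enforced programmatically rather than by hand; a minimal sketch (`safe_worker_count` is a hypothetical helper, not part of any library here):

```python
import os

def safe_worker_count(requested: int) -> int:
    """Cap the requested worker count at the number of available CPUs."""
    # os.cpu_count() can return None on some platforms; fall back to 1.
    available = os.cpu_count() or 1
    return max(1, min(requested, available))

# Requesting more workers than CPUs just returns the CPU count,
# avoiding the oversubscription that can trigger fork failures.
num_workers = safe_worker_count(64)
```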
Hi @sanchit-gandhi!
I have replaced HF evaluate's WER metric with jiwer's (which I believe is the same), and it fixes the issue. So most likely it has something to do with multiprocessing. Thanks.
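For context, the metric being swapped here is just word error rate: word-level edit distance divided by the reference length. Below is a minimal pure-Python sketch of that computation for illustration — it is not jiwer's actual implementation, which additionally handles batching and text normalization:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Classic dynamic-programming edit distance over words, one row at a time.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1] / len(ref)

# Filtering then amounts to keeping segments below some WER threshold
# (the 0.5 here is purely illustrative).
keep = wer("the cat sat", "the cat sit") <= 0.5
```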