Each call of the error rate accumulates the distance and length. Why is that? Is it to have a running-average kind of thing?
Why don't you just return the point-wise WER? @upskyy
@OleguerCanal We did it this way because accumulating over the whole run makes it easier to see the trend. Is there a reason you want a point-wise WER?
That makes sense, but if you're using wandb or TensorBoard, the values can already be smoothed like that, right?
I'm asking because I was training an architecture with a CTC head and an attention head and wanted to compare the WERs of each one. Since I used the same instance of wer_estimator, the values got mixed without my noticing.
Hi @OleguerCanal! Thank you for the helpful feedback!
I reused code I had written before, which is why it ended up like this. @upskyy As @OleguerCanal said, why don't we add a way to report the error rate for each batch? Let's add this as an option, something like --error_rate_logging: accumulate, batch?
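A minimal sketch of what the proposed option could look like. The class name `ErrorRate`, the `mode` argument, and the word-level edit distance are all illustrative assumptions, not the library's actual API: the point is that `mode="batch"` returns only the current batch's WER, while `mode="accumulate"` keeps the running behavior discussed above.

```python
class ErrorRate:
    """Word error rate tracker (illustrative sketch, not the library's API).

    mode="accumulate": return distance/length summed over every call so far.
    mode="batch":      return the WER of the current batch only.
    """

    def __init__(self, mode: str = "accumulate") -> None:
        assert mode in ("accumulate", "batch")
        self.mode = mode
        self.total_distance = 0
        self.total_length = 0

    @staticmethod
    def _edit_distance(ref: list, hyp: list) -> int:
        # Standard Levenshtein distance over word sequences.
        prev = list(range(len(hyp) + 1))
        for i, r in enumerate(ref, start=1):
            curr = [i]
            for j, h in enumerate(hyp, start=1):
                curr.append(min(prev[j] + 1,              # deletion
                                curr[j - 1] + 1,          # insertion
                                prev[j - 1] + (r != h)))  # substitution
            prev = curr
        return prev[-1]

    def __call__(self, references, hypotheses) -> float:
        batch_distance = 0
        batch_length = 0
        for ref, hyp in zip(references, hypotheses):
            ref_words, hyp_words = ref.split(), hyp.split()
            batch_distance += self._edit_distance(ref_words, hyp_words)
            batch_length += len(ref_words)
        # Always update the running totals so either mode can be queried later.
        self.total_distance += batch_distance
        self.total_length += batch_length
        if self.mode == "batch":
            return batch_distance / batch_length
        return self.total_distance / self.total_length
```

With separate instances per head (or `mode="batch"`), the CTC and attention WERs would no longer get mixed:

```python
ctc_wer = ErrorRate(mode="batch")
attn_wer = ErrorRate(mode="batch")
print(ctc_wer(["hello world"], ["hello word"]))  # 0.5: one substitution over two words
```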