
padding_mode default of "reflect" causes an Exception when the input's time_length is smaller than half of the kernel size #2284

Open
chenjiasheng opened this issue Dec 1, 2023 · 2 comments
Labels
bug Something isn't working

Comments

@chenjiasheng

Describe the bug

The "reflect" padding mode tries to copy half-the-kernel-size frames from the input, so it fails when the input's time_length is smaller than half of the kernel size.
This happens when I create a speechbrain.lobes.models.transformer.Transformer.TransformerEncoder with ffn_type=1dcnn and ffn_cnn_kernel_size_list=[5,5]. If the input has only one time frame, the error occurs. In my case the input is a phoneme sequence, for which a single phoneme is definitely a legal input.

Sorry that I can't provide a detailed stack traceback because I can't access the company network right now; I hope the description is clear enough.

Maybe it's safer to make Conv1D default to padding_mode='zeros'?
Or perhaps, in this special case only, fall back to "replicate" mode, leaving legacy code/models unaffected?
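A minimal sketch in plain PyTorch (not going through SpeechBrain itself) that illustrates the underlying failure; the tensor shapes and pad amount below are illustrative, chosen to match kernel_size=5 (so the padding on each side is 2) and a single time frame:

```python
import torch
import torch.nn.functional as F

# One batch element, one channel, time_length=1 (e.g. a single phoneme frame).
x = torch.randn(1, 1, 1)

# "Same" padding for kernel_size=5 pads 2 frames on each side.
pad = 2

# Reflect padding must mirror `pad` existing frames from each edge,
# which is impossible when time_length <= pad, so PyTorch raises.
try:
    F.pad(x, (pad, pad), mode="reflect")
except RuntimeError as e:
    print("reflect padding failed:", e)

# Zero padding (padding_mode='zeros') works for any input length.
y = F.pad(x, (pad, pad), mode="constant", value=0.0)
print(y.shape)  # time dimension grows from 1 to 5
```

Replicate padding (`mode="replicate"`) also succeeds here, since it repeats the edge frame rather than mirroring interior frames, which is why it is suggested as a fallback above.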

Expected behaviour

CNN1D works out of the box in the special case where the input's time_length is smaller than half of the kernel size.

To Reproduce

No response

Environment Details

No response

Relevant Log Output

No response

Additional Context

No response

@chenjiasheng chenjiasheng added the bug Something isn't working label Dec 1, 2023
@chenjiasheng
Author

@Adel-Moumen @BenoitWang

@asumagic
Collaborator

asumagic commented Dec 4, 2023

AFAIK this is indeed an expected consequence of using reflect padding. I don't know if there's a "correct" solution other than changing the padding type for that specific use case.
