
Training failed: the character's lip shape does not change with the speech #145

Open · Liming-belief opened this issue May 11, 2024 · 6 comments

Comments

@Liming-belief

Hello, I trained SyncNet and Wav2Lip and brought the loss down to between 0.25 and 0.3, but at actual inference the character's lips do not move. What could be the reason for this?
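
For context on what a SyncNet loss in the 0.25-0.3 range means: in the original Wav2Lip repo the expert SyncNet is trained with a binary cross-entropy over the cosine similarity between the audio and face embeddings, so an untrained model sits near ln(2) ≈ 0.693, and an eval loss around 0.2-0.25 is usually aimed for before the expert is used for Wav2Lip training. A minimal sketch of that loss, assuming train_syncnet_sam.py keeps the same formulation (the function name here is illustrative, not the repo's):

```python
import torch
import torch.nn.functional as F

# Sketch of the expert SyncNet loss as used in the original Wav2Lip repo
# (color_syncnet_train.py); whether train_syncnet_sam.py uses exactly this
# formulation is an assumption.
bce = torch.nn.BCELoss()

def cosine_bce_loss(audio_emb, face_emb, y):
    """audio_emb, face_emb: (B, D) L2-normalised embeddings; y: (B, 1) labels,
    1.0 for in-sync audio/face pairs, 0.0 for off-sync pairs."""
    d = F.cosine_similarity(audio_emb, face_emb)   # (B,)
    # ReLU-activated encoders keep d non-negative in practice; the clamp is
    # only for safety in this standalone sketch.
    d = d.clamp(1e-7, 1 - 1e-7)
    return bce(d.unsqueeze(1), y)

# With random embeddings the loss sits around ln(2) ≈ 0.693 (chance level), so
# a plateau at 0.25-0.3 means the expert separates most pairs but is still a
# bit weaker than commonly recommended.
```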

see2run commented May 20, 2024

are Percep, Fake, and Real always 0.0?

@Liming-belief (Author)

The situation you mentioned did not occur. Step 191706 | L1: 0.04096 | Vgg: 0.1543 | SW: 0.03 | Sync: 3.293 | DW: 0.025 | Percep: 1.905 | Fake: 0.188, Real: 0.2206 | Load: 0.0115, Train: 1.85. The generated lip shape currently matches the lip shape in the reference frame, but differs from the actual lip shape. @see2run
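
One way to confirm that symptom, i.e. that the generator is copying the reference frame and ignoring the audio, is to run inference twice on the same face video with two different audio tracks and measure how much the two outputs differ. A rough sketch of that check (the file paths are placeholders for the two rendered results, and it assumes OpenCV can read them):

```python
import cv2
import numpy as np

# Placeholder paths: outputs of two inference runs on the SAME face video
# but with two DIFFERENT audio files.
path_a = "results/out_audio1.mp4"
path_b = "results/out_audio2.mp4"

def mean_frame_diff(path_a, path_b, max_frames=200):
    """Mean absolute pixel difference between two videos, frame by frame."""
    cap_a, cap_b = cv2.VideoCapture(path_a), cv2.VideoCapture(path_b)
    diffs = []
    for _ in range(max_frames):
        ok_a, fa = cap_a.read()
        ok_b, fb = cap_b.read()
        if not (ok_a and ok_b):
            break
        diffs.append(np.abs(fa.astype(np.float32) - fb.astype(np.float32)).mean())
    cap_a.release(); cap_b.release()
    return float(np.mean(diffs)) if diffs else 0.0

# A value close to 0 suggests the generator is ignoring the audio, which points
# at the sync term (SW) contributing too little or the expert SyncNet being weak.
print("mean |frame_a - frame_b|:", mean_frame_diff(path_a, path_b))
```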

see2run commented May 20, 2024

Did you train SyncNet and Wav2Lip with the unmodified scripts, or did you make any changes?

@Liming-belief (Author)

I trained using train_syncnet_sam.py and hq_wav2lip_sam_train.py without making any changes to the code.

@kavita-gsphk

@Liming-belief While training SyncNet, did you face an issue where the loss gets stuck? I am stuck there and need help.

see2run commented May 21, 2024

I have a question: at the beginning of training, were the values for Sync, DW, Percep, Fake, and Real all 0.0, like this:

Step 683 | L1: 0.09317 | Vgg: 0.3026 | SW: 0.03 | Sync: 0.0 | DW: 0.0 | Percep: 0.0 | Fake: 0.0, Real: 0.0 | Load: 0.008834, Train: 1.229

or did they have non-zero values from the beginning?
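
For context on why those columns can read 0.0 early in training: in the original Wav2Lip hq training code the sync and perceptual terms are skipped while their weights are zero, and the sync weight is only switched on once the evaluated sync loss drops below 0.75; if the SAM variant also delays training the discriminator itself, Fake and Real would read 0.0 over the same span. A minimal sketch of that kind of gating, assuming hq_wav2lip_sam_train.py follows the same pattern (the function names are illustrative):

```python
# Sketch of how the generator loss terms are gated in the original Wav2Lip hq
# training script; whether hq_wav2lip_sam_train.py uses exactly this schedule
# is an assumption.

syncnet_wt = 0.0   # sync term off at the start (original Wav2Lip default)
disc_wt = 0.0      # the step-683 log above shows DW: 0.0; the later log shows 0.025

def generator_loss(l1_loss, sync_loss, percep_loss):
    # A zero weight means the term is skipped entirely, so the corresponding
    # columns (Sync, Percep) print as 0.0 in the log.
    return (syncnet_wt * sync_loss
            + disc_wt * percep_loss
            + (1.0 - syncnet_wt - disc_wt) * l1_loss)

def after_eval(average_sync_loss):
    # In the original hq_wav2lip_train.py the sync weight is switched on
    # (set to 0.03) only once the evaluated sync loss drops below 0.75,
    # so "Sync: 0.0" early in training is expected there.
    global syncnet_wt
    if average_sync_loss < 0.75:
        syncnet_wt = 0.03
```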
