
Pretrained Model availability #18

Closed
kmkshatriya opened this issue Dec 26, 2023 · 8 comments

Comments

@kmkshatriya

Great work! Could you please share the pre-trained model checkpoint so that we can test how it works?

@sahilg06
Owner

Hi,
Actually, I lost track of our best model weights, but I do have some weights (which may not be the best).
I am attaching the links to those weights below.
https://drive.google.com/file/d/1yNytUV2qI9RRbB_NMPy-Hgo4b0a-d76F/view?usp=sharing
https://drive.google.com/file/d/1Z_J4xJmlyjue8Th8bl95cC60kOD4sZlZ/view?usp=sharing
I hope it helps!

@kmkshatriya
Author

Thanks for sharing. I tried both sets of model weights. PL+DA did not work, and while PRE did work, it produced very poor results.

[attached image: result]

@sahilg06
Owner

As I said earlier, I am not sure about the weights I shared. You can try training on your own with the instructions provided in the README. You could also use a better dataset than CREMA-D for training (such as the MEAD dataset).
Also, what's the issue with PL+DA?

@kmkshatriya
Author

kmkshatriya commented Dec 27, 2023

Thanks again for your response. Yes, I understand that those weights were probably meant for a male character, and a better model could yield better results.

I apologize for the miscommunication regarding PL+DA. I rechecked it today and it also worked.

@sahilg06
Owner

sahilg06 commented Dec 27, 2023

Hi @kmkshatriya. No, those weights weren't only for a male character. Every model was trained on the full CREMA-D dataset. The following could be the issues:

  • Since the dataset used for training (CREMA-D) is quite simple and short, data augmentation is needed for the model to generalize better, and the provided weights might have been trained without it.
  • Moreover, spectacles are not present in the training dataset, so that may be an issue for the model.

Whatever the case may be, the limitation of our model is that it was trained on a simple and short dataset, so training on a more complex dataset such as MEAD will definitely improve the results. If you have the bandwidth for that, you can try it.

Also, did PL+DA give better results?

@kmkshatriya
Author

kmkshatriya commented Dec 27, 2023

Hi @sahilg06, yes, PL+DA gave better results. With PRE the expressions were more exaggerated; PL+DA is better because its expressions are less exaggerated. But both outputs look male and are not suitable for female faces or faces with accessories, as you suggested.

Is there any existing option to meter or control the expression levels?

@sahilg06
Owner

sahilg06 commented Dec 27, 2023

There is an option, but I haven't tried it. Currently, the emotion is passed as a one-hot vector to the emotion encoder. For example, say there are 6 emotions and you are using "happy" during inference; the input will then look like [1,0,0,0,0,0], and similarly for the other emotions. I don't remember the exact order of the emotions used.
Instead of passing an exact emotion via [1,0,0,0,0,0] (happy) or [0,0,1,0,0,0] (sad), you can try interpolating between them, e.g. something like [0.5,0,0.5,0,0,0].

Here the categorical emotion label is converted to a one-hot vector during inference:

emotion = to_categorical(emotion, num_classes=6)

So you just have to adjust the to_categorical call according to your requirements.
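
For reference, a minimal sketch of that kind of interpolation (an editor's illustration, not code from this repo; the index order, with "happy" at 0 and "sad" at 2, and num_classes=6 are assumptions) could look like:

```python
import numpy as np

def blend_emotions(idx_a, idx_b, alpha, num_classes=6):
    """Soft emotion vector: alpha * one_hot(idx_a) + (1 - alpha) * one_hot(idx_b)."""
    vec = np.zeros(num_classes, dtype=np.float32)
    vec[idx_a] += alpha
    vec[idx_b] += 1.0 - alpha
    return vec

# Half "happy" (assumed index 0) and half "sad" (assumed index 2) -> [0.5, 0, 0.5, 0, 0, 0]
emotion = blend_emotions(0, 2, alpha=0.5)
```

The returned vector would then be fed to the emotion encoder in place of the output of to_categorical.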

sahilg06 pinned this issue Dec 27, 2023
@kmkshatriya
Author

Great. I had a similar thought of interpolating the emoted frames with neutral ones to gain control over the final output. I will try it as suggested.
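
A minimal sketch of that frame-level control (an editor's assumption about the post-processing, not part of the repo; frames are assumed to be uint8 arrays of the same shape) could be:

```python
import numpy as np

def blend_frames(emoted_frame, neutral_frame, intensity=0.5):
    """Linear blend: intensity=1.0 keeps the full expression, 0.0 keeps the neutral face."""
    blended = (intensity * emoted_frame.astype(np.float32)
               + (1.0 - intensity) * neutral_frame.astype(np.float32))
    return np.clip(blended, 0, 255).astype(np.uint8)
```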
