
Face state detection & categorization? #123

Open
skywalker999u opened this issue Feb 5, 2018 · 5 comments

Comments

@skywalker999u

I noticed that training would look more decent (at least in the preview) if the model is trained on targeted pairs of data with similar face states (e.g. eyes open/closed, mouth open/closed, face direction, face partially obstructed or out of frame, etc.).
Just a suggestion: if categorization after alignment could be added as a feature to enable more targeted training, wouldn't that be more efficient, since there would be no need to brute-force train against two sets of data with too much disparity?

@skywalker999u skywalker999u changed the title Face state detection & categorization Face state detection & categorization? Feb 5, 2018
@Clorr
Contributor

Clorr commented Feb 5, 2018

I was also thinking about that, just to have a report of face poses after extraction. It would help people know whether their training data contains the same poses in the source and target face sets.

If you have any links to code you can share, I would appreciate it.

@skywalker999u
Author

skywalker999u commented Feb 5, 2018

I have little to no experience in computer vision and Python, so I don't know whether my search results will be useful. And sorry, but I currently don't have the determination to contribute a lot to the project; I am just a user with a programming background and a bit of knowledge about machine learning and OpenCV.

Face direction:
https://www.learnopencv.com/head-pose-estimation-using-opencv-and-dlib/
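
For reference, a minimal sketch of what that article describes: solve a PnP problem between a generic 3D face model and the matching 2D landmarks to recover Euler angles. The model points, landmark subset, and focal-length approximation below are assumptions taken from that article, not from this repo's alignment format.

```python
import cv2
import numpy as np

# Generic 3D face model points (nose tip, chin, eye corners, mouth corners),
# as used in the LearnOpenCV example.
MODEL_POINTS = np.array([
    (0.0, 0.0, 0.0),            # nose tip
    (0.0, -330.0, -65.0),       # chin
    (-225.0, 170.0, -135.0),    # left eye, left corner
    (225.0, 170.0, -135.0),     # right eye, right corner
    (-150.0, -150.0, -125.0),   # left mouth corner
    (150.0, -150.0, -125.0),    # right mouth corner
])

def estimate_pose(image_points, image_size):
    """image_points: 6x2 array of the matching 2D landmarks in pixels."""
    h, w = image_size
    focal = w  # rough approximation of the focal length in pixels
    camera_matrix = np.array([[focal, 0, w / 2],
                              [0, focal, h / 2],
                              [0, 0, 1]], dtype="double")
    dist_coeffs = np.zeros((4, 1))  # assume no lens distortion
    ok, rvec, tvec = cv2.solvePnP(MODEL_POINTS,
                                  np.asarray(image_points, dtype="double"),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_ITERATIVE)
    # Rotation vector -> rotation matrix -> Euler angles (degrees).
    rmat, _ = cv2.Rodrigues(rvec)
    euler = cv2.decomposeProjectionMatrix(np.hstack((rmat, tvec)))[6]
    return euler.flatten().tolist()  # approximately [pitch, yaw, roll]
```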

Eye open/close detection (the technique also seems applicable to the mouth):
https://www.pyimagesearch.com/2017/04/24/eye-blink-detection-opencv-python-dlib/
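
That post boils down to the eye aspect ratio (EAR): the ratio of vertical to horizontal distances between the eye landmarks, which drops sharply when the eye closes. A quick sketch, assuming dlib's 68-point landmark ordering and a guessed threshold:

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: 6x2 array of one eye's landmarks (dlib 68-point ordering)."""
    a = np.linalg.norm(eye[1] - eye[5])  # vertical distance 1
    b = np.linalg.norm(eye[2] - eye[4])  # vertical distance 2
    c = np.linalg.norm(eye[0] - eye[3])  # horizontal distance
    return (a + b) / (2.0 * c)

def eye_is_closed(eye, threshold=0.2):
    # 0.2 is only a starting point; the article suggests tuning per dataset.
    return eye_aspect_ratio(eye) < threshold
```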

@IUsedToBeAPygmy

dfaker has written some code to compare face set A to face set B and see the (dis)similarity between images, so you can get an idea of which faces in A do not have a representative face in B...

https://github.com/dfaker/df/blob/master/imageGapsScanner.py

@dfaker
Contributor

dfaker commented Feb 9, 2018

@IUsedToBeAPygmy Yep, it's in my main repo now. It just uses the MSE of the 2D face landmarks; nothing fancy, but surprisingly effective. Another reason to have the alignments saved rather than calculated on demand.

The warping in DF uses the same distance measure to warp an input face to have the proportions of the most similar opposing face.
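
To make the idea concrete (this is not dfaker's actual code; see imageGapsScanner.py above for that), a rough sketch of a landmark-MSE coverage report between two face sets, assuming the saved alignments have already been loaded into dicts mapping filename to a 68x2 landmark array:

```python
import numpy as np

def landmark_mse(lm_a, lm_b):
    """Mean squared error between two 68x2 landmark arrays."""
    return float(np.mean((np.asarray(lm_a) - np.asarray(lm_b)) ** 2))

def coverage_report(landmarks_a, landmarks_b):
    """For each face in A, the distance to its closest match in B.
    Large values flag poses in A with no good counterpart in B."""
    report = []
    for name_a, lm_a in landmarks_a.items():
        best = min(landmark_mse(lm_a, lm_b) for lm_b in landmarks_b.values())
        report.append((name_a, best))
    return sorted(report, key=lambda item: item[1], reverse=True)
```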

@GuitarHero4

In this article, under the section "Training for Low-End GPUs", it says that it helps to split faces into frontal, three-quarter, and side views. But wouldn't that help for high-end GPUs as well, since it should give a better-fitting model?

I'd love to take a crack at adding a feature like this, but I am "not very well versed" in Python (read: never done Python). I have a feeling the data is already there: we already have the meshes, so it should just be a case of calculating the angle between the eyes or something like that. If someone could give me a pointer on where to get started, I could try this, though I think it might be semi-trivial for someone more experienced. :)
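
As a possible starting point, here is a hedged sketch of binning faces by yaw once a pose estimate is available (for instance from the solvePnP sketch earlier in this thread); the bin boundaries are guesses and would need tuning against real extractions:

```python
def yaw_bin(yaw_degrees):
    """Classify a face as frontal, three-quarter, or side view by |yaw|."""
    a = abs(yaw_degrees)
    if a < 20:
        return "frontal"
    if a < 50:
        return "three_quarter"
    return "side"

# Hypothetical usage: sort extracted faces into subfolders by bin.
# `alignments`, `landmark_subset`, and `estimate_pose` are placeholders here.
# for filename, landmarks in alignments.items():
#     pitch, yaw, roll = estimate_pose(landmark_subset(landmarks), image_size)
#     shutil.move(filename, os.path.join(output_dir, yaw_bin(yaw), filename))
```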
