CrossInfoNet: Multi-Task Information Sharing Based Hand Pose Estimation

This repository contains the implementation details of the paper.

The project page can be found here.

~~I have graduated from the university with a master's degree, so this repository may not be updated anymore.~~

Requirements

  • python 2.7
  • tensorflow == 1.3~1.9
  • matplotlib < 3.0
  • numpy
  • scipy
  • pillow
  • other common packages as needed

Our code has been tested on Ubuntu 14.04 and 16.04 with a GTX 1080 and an RTX 2080 Ti; a sample install sketch follows the example configurations below.

Two example configurations:

  • Config 1: GTX 1080 + CUDA 9.0 + cuDNN 7.x + TensorFlow 1.9 + Ubuntu 16.04
  • Config 2: RTX 2080 Ti + CUDA 10 + cuDNN 7.x + TensorFlow 1.13 + Ubuntu 16.04
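
For reference, a minimal environment setup for config 1 might look like the following. This is only a sketch: it assumes a Python 2.7 environment, the exact TensorFlow wheel must match your CUDA/cuDNN combination, and the version pins are taken from the requirement list above.

pip install tensorflow-gpu==1.9.0                  # any version in the 1.3–1.9 range listed above should work
pip install "matplotlib<3.0" numpy scipy pillow    # remaining packages from the requirement list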

Make sure to match the correct cuDNN version for your CUDA/TensorFlow combination; see this site.

Data Preprocessing

Download the datasets (ICVL, NYU, and MSRA).

Thanks to DeepPrior++ for providing the base data preprocessing and online data augmentation code.

We use the precomputed centers from V2V-PoseNet (@mks0601) when training on the ICVL and NYU datasets.

Please refer to cache/${dataset-name}/readme.md for more details.
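
As one concrete instance (assuming the NYU dataset directory is spelled NYU in your checkout; substitute ICVL or MSRA as appropriate):

cat cache/NYU/readme.md        # describes where to place the downloaded data and precomputed center files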

Training and Testing

Here we provide an example for NYU training.

cd $ROOT
cd network/NYU
python train_and_test.py

Here $ROOT is the root path where you put this project.

For testing, run the following command from $ROOT/network/NYU/:

python test_nyu_cross.py

For the MSRA dataset, cd to the $ROOT/network/MSRA/ directory, then run the training or testing script as follows:

train:  python train_and_test.py --test-sub ${sub-num}
test:   python test_msra.py --test-sub ${sub-num}

${sub-num} is the subject held out for testing during cross-validation.

Finally, you can run python combtxt.py to combine the 9 test results.
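
For instance, a full leave-one-subject-out evaluation might look like the sketch below. It assumes the 9 MSRA subjects are numbered 0 through 8; check the dataset layout to confirm the numbering.

cd $ROOT/network/MSRA/
for s in 0 1 2 3 4 5 6 7 8; do python test_msra.py --test-sub $s; done    # test with each subject held out
python combtxt.py                                                         # merge the 9 per-subject results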

Results

When testing, the model outputs the mean joint error. If you want to see the qualitative results, set visual=True in the test script. We use awesome-hand-pose-estimation to evaluate the accuracy of the proposed CrossInfoNet on the ICVL, NYU, and MSRA datasets. The predicted labels are here.

We also tested the performance on the HANDS 2017 frame-based hand pose estimation challenge dataset. Here is the result as of Feb. 2, 2019.

(image: HANDS 2017 challenge result)

Realtime demo

More details can be found in the realtime_demo directory.
