Our annotated dataset can be downloaded below:

Format: Compressed .zip File
Size: 129.08 MB
Experimental Setup (Image 1 | Image 2 | Image 3)

The root directory of the dataset contains the following files and folders (listed in alphabetical order):

  • Sample_Application (Folder): This folder contains source code for viewing the depth images and confidence maps. It also contains a framework you can follow when running your hand pose estimation algorithm; the framework ensures that the result file generated by your algorithm is in the required format.

  • Test_Dataset (Folder) and Training_Dataset (Folder): Each of these folders contains 15 sub-directories, one per subject (the volunteers whose hand pose data was recorded), holding that subject's hand pose data. Each sub-directory name begins with the subject's unique subject_id. Inside each sub-directory you will find files for 29 unique hand gestures: 8 are from Chinese Number Counting (CNC) of 1-10, and the remaining 21 are from the American Sign Language (ASL) alphabet A-Z. Note that J and Z are excluded because they are both dynamic gestures. Furthermore, due to limitations of our ShapeHand data glove, it was difficult to obtain reliable ground-truth data for some gestures (3 and 7 from CNC; R, T, and W from ASL), so these are also excluded. This leaves a total of 29 poses per subject.

    • The directory for each subject is named 'subject-<subject id>' and contains 29 examples. Each (training) example consists of the following four files, which collectively form an (instance, label) pair for a unique pose:
      • subject-<subject id>-<gesture>.ini (Configuration File)
      • subject-<subject id>-<gesture>.skdepth (Depth Image)
      • subject-<subject id>-<gesture>.skconf (Confidence Map)
      • subject-<subject id>-<gesture>.shand (Data-glove Annotation)

    • Each test example, on the other hand, contains only the first three files (i.e. the instance).
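Based on the naming scheme above, a small Python sketch can enumerate the files belonging to one example. The dataset root path, the function name, and the example subject id and gesture used here are all hypothetical; only the 'subject-<subject id>-<gesture>.<ext>' pattern comes from the description above.

```python
import os

def example_files(root, subject_id, gesture, is_training=True):
    """Build the file paths for one (subject, gesture) example.

    Follows the naming scheme 'subject-<subject id>-<gesture>.<ext>'
    described above. Test examples lack the .shand annotation.
    """
    folder = "Training_Dataset" if is_training else "Test_Dataset"
    stem = os.path.join(root, folder,
                        "subject-{}".format(subject_id),
                        "subject-{}-{}".format(subject_id, gesture))
    exts = [".ini", ".skdepth", ".skconf"]
    if is_training:  # only training examples include the glove annotation
        exts.append(".shand")
    return [stem + ext for ext in exts]

# Hypothetical usage: four files for a training example, three for a test one.
train_paths = example_files("/data/handpose", "01", "A")
test_paths = example_files("/data/handpose", "01", "A", is_training=False)
```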

  • Hand_Kinematic_Model.png (Image File): This .png image shows the kinematic model of the right hand and illustrates the ordering of the hand joints. This ordering is important: your algorithm must output the estimated coordinates for each joint in the same order when generating its results.

  • readme.pdf (PDF File): This file contains further instructions on how to use the dataset so that your results can be evaluated by our system.

  • subject_pose_mappings.csv (CSV File): This file lists all the files present in the Test_Dataset folder. It is important because it defines the order in which your algorithm must estimate the poses, and hence the order in which your results must be stored in the file you upload. The sample code in the Sample_Application folder reads this CSV file to ensure the correct ordering is followed.
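The ordering step can be sketched in Python as follows. The sample rows below are invented for illustration; the actual column layout of subject_pose_mappings.csv may differ, so adapt this to the real file (or use the provided sample code).

```python
import csv
import io

# Hypothetical excerpt of subject_pose_mappings.csv; the real file lists
# every test example in the order results must be written.
sample = "subject-01-A\nsubject-01-B\nsubject-02-1\n"

def read_pose_order(f):
    """Return the first column of each row, in file order.

    This order must be preserved when writing the result file.
    """
    return [row[0] for row in csv.reader(f) if row]

order = read_pose_order(io.StringIO(sample))
```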


  • In our terminology, a hand pose refers to the combination of a specific hand gesture and its orientation relative to the depth camera. The orientation is defined as the 3D rotation of the gesture from its canonical position (i.e. wrist straight and pointing downward) with the hand base (i.e. the wrist) at the coordinate origin.

  • For each sub-directory in the Test_Dataset and Training_Dataset folders, the first three files (i.e. the .ini, .skdepth, and .skconf files) completely describe the input depth data. The .ini file contains basic information (e.g. image width and height) about the input depth image. The .skdepth and .skconf files are direct memory dumps of the raw input (the depth and confidence maps) from SoftKinetic; for both maps, each pixel value occupies 2 bytes, and all pixel values of an image are stored in scanline order from top-left to bottom-right. Finally, the .shand file (present only in Training_Dataset sub-directories) contains the ground-truth annotation of the hand pose, measured with the data glove. It is a comma-separated ASCII file containing 60 values, which denote the 3D positions of the 20 hand joints following the hand kinematic model discussed previously.
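A minimal Python sketch of readers for these formats is given below. It assumes little-endian, unsigned 16-bit pixels (the byte layout is not stated above, so verify against the sample application), and the function names are our own.

```python
import numpy as np

def read_map(path, width, height):
    """Read a .skdepth or .skconf file.

    The file is a raw dump of width*height 2-byte pixels in scanline
    order (top-left to bottom-right); width and height come from the
    matching .ini file. Little-endian unsigned 16-bit is an assumption.
    """
    data = np.fromfile(path, dtype="<u2", count=width * height)
    return data.reshape(height, width)

def read_shand(path):
    """Read a .shand annotation: 60 comma-separated ASCII values giving
    the 3D positions of the 20 joints in kinematic-model order."""
    values = np.loadtxt(path, delimiter=",")
    return values.reshape(20, 3)
```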

  • Issues with the ShapeHand data glove (as a ground-truth measuring instrument): We have empirically evaluated the reliability of the ShapeHand data glove. In a number of simple but informative tests, we observed that it produces reasonable and consistent measurements (i.e. within several mm accuracy) on all finger joints except the thumb, where significant errors occur. We believe the source of this error lies in the design of the instrument. As a result, even though the three thumb-related joints are included in our .shand files, they are ignored during performance evaluation; in other words, the three thumb-related joints in the hand kinematic model are not considered when evaluating a hand pose estimation algorithm. The ShapeHand data glove also gives inaccurate measurements when the palm arches (bends) deeply, which forced us to withdraw several gestures from consideration, including 3, 7, R, T, and W.
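For illustration, the exclusion of thumb joints might look like the sketch below, using mean Euclidean joint error as the metric. The thumb joint indices used here are hypothetical (check Hand_Kinematic_Model.png for the actual ordering), and the exact error metric used by our system is described on the submission page.

```python
import numpy as np

# Hypothetical indices of the three thumb joints in the 20-joint model;
# the real ordering is defined by Hand_Kinematic_Model.png.
THUMB_JOINTS = [1, 2, 3]

def mean_joint_error(pred, gt, ignore=THUMB_JOINTS):
    """Average Euclidean distance over the evaluated joints.

    pred, gt: (20, 3) arrays of 3D joint positions. The thumb joints
    are excluded, mirroring the evaluation protocol described above.
    """
    keep = [j for j in range(gt.shape[0]) if j not in ignore]
    d = np.linalg.norm(pred[keep] - gt[keep], axis=1)
    return d.mean()
```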

  • The "Result Submission" function has been suspended due to a web server maintenance issue. Sorry for any inconvenience caused. (2018-05-19)

  • You can evaluate the performance of your algorithm and compare it with the existing ones here by uploading your results via the 'Submit Results' tab at the top of this page. This page also provides more information on the file format for upload.

  • Once your results have been uploaded, you will be given the option to permanently save them in our ranking table. If you choose to save your results, our system will analyse them, evaluate the accuracy of your algorithm, and display the outcome in the ranking table. The system will then take you to the 'View Evaluation Table' page, which you can also reach by clicking the 'View Evaluation Table' button at the top of this page.

  • If you would like to view a graphical summary of all the algorithms that have been evaluated by our system, click the 'Graphical Summary' button at the top of this page. Two types of graphs are displayed there. The first is a bar chart of all comparison algorithms sorted in ascending order of average joint error. The second displays the essential details of the five most accurate algorithms evaluated by our system.