Dongfeng Gu

Tensorflow: 3D-DenseNet for action detection




Project ReadMe

3D-DenseNet with TensorFlow

Two types of Densely Connected Convolutional Networks (DenseNets) are available:

Each model can be tested on the following datasets:

The number of layers, blocks, growth rate, video normalization and other training parameters may be changed through the shell or inside the source code.
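As a rough illustration of how such parameters could be exposed on the command line, here is a minimal argparse sketch. The flag names (`--depth`, `--total_blocks`, `--growth_rate`, `--normalize_per_channel`) are hypothetical and not necessarily the repo's actual interface:

```python
# Hypothetical sketch: exposing training parameters as shell flags.
# Flag names and defaults are illustrative, not the repo's real CLI.
import argparse

def build_parser():
    parser = argparse.ArgumentParser(description="3D-DenseNet training options")
    parser.add_argument("--depth", type=int, default=20,
                        help="total number of layers")
    parser.add_argument("--total_blocks", type=int, default=3,
                        help="number of dense blocks")
    parser.add_argument("--growth_rate", type=int, default=12,
                        help="growth rate k")
    parser.add_argument("--normalize_per_channel", action="store_true",
                        help="apply per-channel video normalization")
    return parser

# Example: override only the depth; other values keep their defaults.
args = build_parser().parse_args(["--depth", "40"])
```

Any parameter not passed on the command line falls back to its default, which mirrors the README's note that values can also be changed inside the source code.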

There are also many other implementations that may be useful.

Citation for DenseNet:

    @article{huang2016densely,
        author = {Huang, Gao and Liu, Zhuang and Weinberger, Kilian Q.},
        title = {Densely Connected Convolutional Networks},
        journal = {arXiv preprint arXiv:1608.06993},
        year = {2016}
    }

Step 1: Data preparation

  1. Download the UCF101 (Action Recognition Data Set).
  2. Extract the UCF101.rar file; you will get the UCF101/<action_name>/<video_name>.avi folder structure.
  3. Use the ./data_prepare/ script to decode the UCF101 video files into image files.
    • Run ./data_prepare/ ../UCF101 5 (5 is the fps rate).
  4. Use the ./data_prepare/ script to create/update the {train,test}.list files according to the new UCF101 image folder structure generated in the last step.
    • Run ./data_prepare/ .../UCF101 4; this updates the test.list and train.list files (4 means the ratio of test to train data is 1/4).
    • train.list:
    • train.list:
        database/ucf101/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g01_c01 0
        database/ucf101/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g01_c02 0
        database/ucf101/train/ApplyEyeMakeup/v_ApplyEyeMakeup_g01_c03 0
        database/ucf101/train/ApplyLipstick/v_ApplyLipstick_g01_c01 1
        database/ucf101/train/ApplyLipstick/v_ApplyLipstick_g01_c02 1
        database/ucf101/train/ApplyLipstick/v_ApplyLipstick_g01_c03 1
        database/ucf101/train/Archery/v_Archery_g01_c01 2
        database/ucf101/train/Archery/v_Archery_g01_c02 2
        database/ucf101/train/Archery/v_Archery_g01_c03 2
        database/ucf101/train/Archery/v_Archery_g01_c04 2
        database/ucf101/train/BabyCrawling/v_BabyCrawling_g01_c01 3
        database/ucf101/train/BabyCrawling/v_BabyCrawling_g01_c02 3
        database/ucf101/train/BabyCrawling/v_BabyCrawling_g01_c03 3
        database/ucf101/train/BabyCrawling/v_BabyCrawling_g01_c04 3
        database/ucf101/train/BalanceBeam/v_BalanceBeam_g01_c01 4
        database/ucf101/train/BalanceBeam/v_BalanceBeam_g01_c02 4
        database/ucf101/train/BalanceBeam/v_BalanceBeam_g01_c03 4
        database/ucf101/train/BalanceBeam/v_BalanceBeam_g01_c04 4
  5. Copy or move the test.list and train.list files to the data_providers folder.
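The list-generation step above can be sketched in Python. This is a minimal sketch only: the repo's actual ./data_prepare/ shell script may differ, and the `root/<action>/<clip>/` layout is assumed from the train.list example:

```python
import os

def build_lists(root, ratio=4):
    """Build train/test list lines from root/<action>/<clip>/ image folders.

    Each action name maps to an integer label (alphabetical order), matching
    the trailing label column in train.list; every `ratio`-th clip of an
    action is held out for testing, giving a test/train ratio of ~1/ratio.
    """
    train, test = [], []
    for label, action in enumerate(sorted(os.listdir(root))):
        clips = sorted(os.listdir(os.path.join(root, action)))
        for i, clip in enumerate(clips):
            line = f"{root}/{action}/{clip} {label}"
            # every `ratio`-th clip goes to the test set
            (test if i % ratio == ratio - 1 else train).append(line)
    return train, test
```

With ratio=4, each action contributes roughly one quarter of its clips to test.list and the rest to train.list, reproducing the 1/4 split described in step 4.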

Step 2: Train or Test the model



Test results on the MERL dataset. Per-channel video normalization was used.

| Model type        | Depth | MERL |
|-------------------|-------|------|
| DenseNet (k = 12) | 20    | 70%  |

Approximate training time for models on GeForce GTX TITAN X (12 GB memory):


The repo ships with a requirements file, so the easiest way to install all dependencies is to run pip install -r requirements.txt.
