# pruned_lightweight_openpose
**Repository Path**: avBuffer/pruned_lightweight_openpose
## Basic Information
- **Project Name**: pruned_lightweight_openpose
- **Description**: A Pruned Version for Lightweight OpenPose
- **Primary Language**: Unknown
- **License**: Not specified
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 1
- **Forks**: 0
- **Created**: 2021-02-19
- **Last Updated**: 2022-04-09
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# A Pruned Version for Lightweight OpenPose
This repository contains channel-pruned models for lightweight OpenPose ([Real-time 2D Multi-Person Pose Estimation on CPU: Lightweight OpenPose](https://arxiv.org/pdf/1811.12004.pdf)), and mainly follows the work of [Daniil Osokin](https://github.com/Daniil-Osokin/lightweight-human-pose-estimation.pytorch). The channel pruning code is based on our ICCV 2019 submission and will be open-sourced after acceptance.
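The pruning code itself is not included in this repository. As a rough illustration of what channel pruning involves, the sketch below ranks a convolution layer's output channels by L1 weight magnitude and keeps the strongest ones; this is a generic criterion shown for illustration only, not necessarily the one used in the authors' submission, and `keep_ratio` is a hypothetical parameter.

```python
import torch
import torch.nn as nn

def select_channels_by_l1(conv: nn.Conv2d, keep_ratio: float = 0.7):
    """Rank the output channels of a conv layer by the L1 norm of their
    weights and return the sorted indices of the channels to keep.

    Generic magnitude-based criterion for illustration; the criterion
    used by the authors' pruning code may differ.
    """
    # conv.weight has shape (out_channels, in_channels, kH, kW);
    # sum absolute values over everything but the output-channel axis.
    l1_per_channel = conv.weight.detach().abs().sum(dim=(1, 2, 3))
    num_keep = max(1, int(conv.out_channels * keep_ratio))
    keep = torch.argsort(l1_per_channel, descending=True)[:num_keep]
    return torch.sort(keep).values

# Example: keep 70% of the channels of a toy layer.
conv = nn.Conv2d(64, 128, kernel_size=3, padding=1)
kept = select_channels_by_l1(conv, keep_ratio=0.7)
pruned_weight = conv.weight.detach()[kept]
print(pruned_weight.shape)  # torch.Size([89, 64, 3, 3])
```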
## Table of Contents
* [Requirements](#requirements)
* [Prerequisites](#prerequisites)
* [Training](#training)
* [Validation](#validation)
* [Demo](#demo)
* [Pruned model](#pruned-model)
* [Fine-tuned model](#fine-tuned-model)
* [Unpruned pre-trained model](#unpruned-pre-trained-model)
## Requirements
* Ubuntu 16.04
* Python 3.6
* PyTorch 0.4.1 (should also work with 1.0, but not tested)
## Prerequisites
1. Download the COCO 2017 dataset ([http://cocodataset.org/#download](http://cocodataset.org/#download); train, val, and annotations) and unpack it to the `./coco/` folder, which is the layout the commands below assume (a quick check is sketched after this list).
2. Install the requirements: `pip install -r requirements.txt`
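The expected layout below is inferred from the training and validation commands in this README; a minimal Python check:

```python
import os

# Expected COCO layout, inferred from the commands in this README.
expected = [
    './coco/train2017',
    './coco/val2017',
    './coco/annotations/person_keypoints_val2017.json',
]
for path in expected:
    status = 'ok' if os.path.exists(path) else 'MISSING'
    print(f'{status:8s}{path}')
```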
## Training
1. Fine-tune the pruned model. Run `CUDA_VISIBLE_DEVICES=<gpu_ids> python train_prune.py --train-images-folder ./coco/train2017/ --prepared-train-labels prepared_train_annotation.pkl --val-labels val_subset.json --val-images-folder ./coco/val2017/ --checkpoint-path ./pruned_models/0.3.pth.tar --num-refinement-stages 3 --experiment-name <name> --weights-only` (substitute the pruned checkpoint and experiment name as needed; `prepared_train_annotation.pkl` and `val_subset.json` are produced by the label-preparation scripts of the upstream repository).
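The `--weights-only` flag indicates that only the network weights should be restored from the checkpoint, not any optimizer or scheduler state. A minimal sketch of that idea, assuming the checkpoint is either a raw state dict or wraps one under a `state_dict` key (the exact format used by `train_prune.py` may differ):

```python
import torch

def load_weights_only(model, checkpoint_path):
    """Restore network weights from a checkpoint, ignoring any optimizer
    or scheduler state stored alongside them.

    The model must already have the pruned architecture so that the
    tensor shapes in the state dict match.
    """
    ckpt = torch.load(checkpoint_path, map_location='cpu')
    # Training checkpoints are often saved as {'state_dict': ..., 'optimizer': ...};
    # fall back to treating the whole file as a raw state dict.
    state = ckpt['state_dict'] if 'state_dict' in ckpt else ckpt
    model.load_state_dict(state)
    return model
```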
## Validation
1. For synchronous validation during training, run `CUDA_VISIBLE_DEVICES=<gpu_ids> python val_per_epoch.py`
2. To validate a specific checkpoint, run `python val_prune_oneepoch.py --labels ./coco/annotations/person_keypoints_val2017.json --images-folder ./coco/val2017 --checkpoint-path <checkpoint>` (the underlying COCO keypoint evaluation is sketched below)
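Both scripts report COCO keypoint AP. For reference, the standard evaluation with `pycocotools` looks like the sketch below; `detections.json` is a hypothetical name for the predictions a validation run writes out in COCO results format.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Ground-truth keypoint annotations and model predictions.
coco_gt = COCO('./coco/annotations/person_keypoints_val2017.json')
coco_dt = coco_gt.loadRes('detections.json')  # hypothetical output file

# 'keypoints' selects OKS-based keypoint evaluation.
evaluator = COCOeval(coco_gt, coco_dt, 'keypoints')
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()  # prints AP/AR, including the headline AP figure
```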
## Demo
1. For a simple demo, run `python demo.py --checkpoint-path ./fine-tuned_models/<checkpoint> --images <path-to-images>`
## Pruned model
We provide two pruned models with different compression rates: `./pruned_models/0.3.pth.tar` (15.92% FLOPs reduction) and `./pruned_models/0.8.pth.tar` (25.6% FLOPs reduction).
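To compare the two checkpoints directly, their parameter counts can be read off with a few lines of PyTorch. This assumes each file is either a raw state dict or wraps one under a `state_dict` key; adjust if the actual layout differs.

```python
import torch

def count_params(checkpoint_path):
    """Sum the number of weight values stored in a checkpoint file."""
    ckpt = torch.load(checkpoint_path, map_location='cpu')
    state = ckpt['state_dict'] if 'state_dict' in ckpt else ckpt
    return sum(t.numel() for t in state.values())

for path in ('./pruned_models/0.3.pth.tar', './pruned_models/0.8.pth.tar'):
    print(f'{path}: {count_params(path):,} parameters')
```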
## Fine-tuned model
The model fine-tuned from the pruned model `./pruned_models/0.3.pth.tar` is available in `./fine-tuned_models/`.
## Unpruned pre-trained model
The model expects a normalized image (mean = [128, 128, 128], scale = [1/256, 1/256, 1/256]) in planar BGR format.
The model pre-trained on COCO is available at `./pre-trained_models/checkpoint_iter_370000.pth.tar`; it reaches 40% AP on the COCO validation set (38.6% AP on the val *subset*).
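Concretely, that normalization can be written as the following sketch, using OpenCV (which loads images in BGR order) and NumPy; `example.jpg` is a hypothetical input.

```python
import cv2
import numpy as np

def preprocess(image_path):
    """Convert an image to the normalized planar BGR tensor the model
    expects: (x - 128) / 256 per channel, laid out as (1, 3, H, W)."""
    img = cv2.imread(image_path)        # OpenCV loads BGR, HWC layout
    img = (img.astype(np.float32) - 128.0) / 256.0
    img = np.transpose(img, (2, 0, 1))  # HWC -> CHW ("planar" layout)
    return img[np.newaxis]              # add a batch dimension

x = preprocess('example.jpg')  # hypothetical input image
print(x.shape, x.dtype)        # (1, 3, H, W) float32
```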