# pytorch-jacinto-ai-devkit
**Repository Path**: messier201/pytorch-jacinto-ai-devkit
## Basic Information
- **Project Name**: pytorch-jacinto-ai-devkit
- **Description**: No description available
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: master
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-06-30
- **Last Updated**: 2024-05-29
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# Jacinto-AI-DevKit (PyTorch)
Training & Quantization Tools For Embedded AI Development - in PyTorch.
## Notice
- If you have not visited our landing page on GitHub, please do so: [https://github.com/TexasInstruments/jacinto-ai-devkit](https://github.com/TexasInstruments/jacinto-ai-devkit)
- **Issue Tracker for jacinto-ai-devkit:** You can file issues or ask questions at **e2e**: [https://e2e.ti.com/support/processors/f/791/tags/jacinto_2D00_ai_2D00_devkit](https://e2e.ti.com/support/processors/f/791/tags/jacinto_2D00_ai_2D00_devkit). When creating a new issue, kindly include **jacinto-ai-devkit** in the tags (there is a space to enter tags at the bottom of the page).
- **Issue Tracker for TIDL:** [https://e2e.ti.com/support/processors/f/791/tags/TIDL](https://e2e.ti.com/support/processors/f/791/tags/TIDL). Please include the tag **TIDL** (there is a space to enter tags at the bottom of the page).
- If you do not get a reply within two days, please contact us at: jacinto-ai-devkit@list.ti.com
## Introduction
This code provides a set of low-complexity deep learning examples and models for low-power embedded systems. Low-power embedded systems often require balancing complexity against accuracy - a tough task that demands a significant amount of expertise and experimentation. We call this process **complexity optimization**. In addition, we would like to bridge the gap between deep learning training frameworks and real-time embedded inference by providing ready-to-use examples that enable **ease of use**. Scripts for training, validation, and complexity analysis are also provided.
This code also includes tools for **Quantization Aware Training** that can output an 8-bit quantization-friendly model - these tools can be used to improve the quantized accuracy and bring it close to floating-point accuracy. For more details, please refer to the section on [Quantization](docs/Quantization.md).
**Several of these models have been verified to work on [TI's Jacinto7 Automotive Processors](http://www.ti.com/processors/automotive-processors/tdax-adas-socs/overview.html).** These tools and software are primarily intended as examples for learning and research.
## Installation Instructions
These instructions are for installation on **Ubuntu 18.04**.
Install Anaconda with Python 3.7 or higher from https://www.anaconda.com/distribution/
After installation, make sure that your Python is indeed Anaconda Python 3.7 or higher by typing:
```
python --version
```
Clone this repository into a local folder.
Execute the following shell script to install the dependencies:
```
./setup.sh
```
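As an optional convenience (this check is not part of `setup.sh`), the Python version requirement can also be verified programmatically; the snippet below assumes the Anaconda interpreter is on your `PATH` as `python3`:

```
# Assumption: the Anaconda interpreter is available as python3.
# Fails (and prints the offending version) if Python is older than 3.7.
python3 -c 'import sys; assert sys.version_info >= (3, 7), sys.version; print("Python version OK")'
```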
## Examples
Below are some of the examples that are currently available. Click on each of the links below for a full description of the example.
[**Image Classification**](docs/Image_Classification.md)
[**Semantic Segmentation**](docs/Semantic_Segmentation.md)
[**Object Detection**](https://git.ti.com/cgit/jacinto-ai/pytorch-mmdetection/about/) - this link will take you to another repository, where we have our object detection training scripts.
[Depth Estimation](docs/Depth_Estimation.md)
[Motion Segmentation](docs/Motion_Segmentation.md)
[**Multi Task Estimation**](docs/Multi_Task_Learning.md)
Object Keypoint Estimation - coming soon.
[**Quantization**](docs/Quantization.md)
## Model Quantization
Quantization (especially 8-bit quantization) is important to get the best inference throughput. Quantization can be done using either **Post Training Quantization (PTQ)** or **Quantization Aware Training (QAT)**.
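As an illustrative sketch only (this is not TIDL's actual calibration algorithm, and the function names below are hypothetical), affine 8-bit PTQ maps floating-point values to integers using a scale and zero-point derived from min/max statistics gathered during calibration:

```python
# Hypothetical sketch of affine 8-bit post-training quantization (PTQ):
#   q = clamp(round(x / scale) + zero_point, 0, 255)
# The scale and zero-point come from min/max statistics of calibration data.

def calibrate(values, qmin=0, qmax=255):
    """Derive scale and zero-point from observed min/max (calibration)."""
    lo, hi = min(values), max(values)
    lo, hi = min(lo, 0.0), max(hi, 0.0)  # the range must include zero
    scale = (hi - lo) / (qmax - qmin) or 1.0
    zero_point = round(qmin - lo / scale)
    return scale, int(zero_point)

def quantize(values, scale, zero_point, qmin=0, qmax=255):
    """Map floats to 8-bit integer codes."""
    return [max(qmin, min(qmax, round(v / scale) + zero_point)) for v in values]

def dequantize(codes, scale, zero_point):
    """Map 8-bit integer codes back to (approximate) floats."""
    return [(q - zero_point) * scale for q in codes]

x = [-1.0, 0.0, 0.5, 2.0]          # stand-in for calibration/inference data
scale, zp = calibrate(x)
q = quantize(x, scale, zp)          # -> [0, 85, 127, 255]
x_hat = dequantize(q, scale, zp)    # close to x, within one quantization step
```

The reconstruction error per element is bounded by the quantization step `scale`, which is why choosing a tight calibration range matters for PTQ accuracy.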
[TI Deep Learning Library (TIDL)](https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/psdk_rtos_auto/docs/user_guide/sdk_components.html#ti-deep-learning-library-tidl) that is part of the [Processor SDK RTOS for Jacinto7](https://software-dl.ti.com/jacinto7/esd/processor-sdk-rtos-jacinto7/latest/exports/docs/psdk_rtos_auto/docs/user_guide/index.html) natively supports **PTQ** - TIDL can take floating point models and can quantize them using advanced calibration methods.
We have guidelines on how to choose models and how to train them to get the best accuracy with quantization. If these guidelines are followed, a significant accuracy drop with **PTQ** is unlikely. For models that still show a significant accuracy drop with quantization, the accuracy can be improved using **QAT**. Please see the documentation on **[Quantization](docs/Quantization.md)** for more details.
## Additional Information
Some of the common training and validation commands are provided in shell scripts (.sh files) in the root folder.
Landing Page: [https://github.com/TexasInstruments/jacinto-ai-devkit](https://github.com/TexasInstruments/jacinto-ai-devkit)
Actual Git Repositories: [https://git.ti.com/jacinto-ai](https://git.ti.com/jacinto-ai)
Each of the repositories listed at the above link has an "about" tab with documentation and a "summary" tab with git clone/pull URLs.
## Acknowledgements
Our source code uses parts of the following open source projects. We would like to sincerely thank their authors for making their code bases publicly available.
|Module/Functionality |Parts of the code borrowed/modified from |
|----------------------------------|-------------------------------------------------------------------------------------|
|Datasets, Models |https://github.com/pytorch/vision, https://github.com/ansleliu/LightNet |
|Training, Validation Engine/Loops |https://github.com/pytorch/examples, https://github.com/ClementPinard/FlowNetPytorch |
|Object Detection |https://github.com/open-mmlab/mmdetection |
## License
Please see the [LICENSE](./LICENSE) file for more information about the license under which this code is made available.