# openmixup
**Repository Path**: frontxiang/openmixup
## Basic Information
- **Project Name**: openmixup
- **Description**: https://github.com/Westlake-AI/openmixup
- **Primary Language**: Python
- **License**: Apache-2.0
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No
## Statistics
- **Stars**: 1
- **Forks**: 0
- **Created**: 2022-11-12
- **Last Updated**: 2025-08-30
## Categories & Tags
**Categories**: Uncategorized
**Tags**: None
## README
# OpenMixup
[Releases](https://github.com/Westlake-AI/openmixup/releases) |
[PyPI](https://pypi.org/project/openmixup) |
[arXiv](https://arxiv.org/abs/2209.04851) |
[Documentation](https://openmixup.readthedocs.io/en/latest/) |
[License](https://github.com/Westlake-AI/openmixup/blob/main/LICENSE) |
[Issues](https://github.com/Westlake-AI/openmixup/issues)
[📘Documentation](https://openmixup.readthedocs.io/en/latest/) |
[🛠️Installation](https://openmixup.readthedocs.io/en/latest/install.html) |
[🚀Model Zoo](https://github.com/Westlake-AI/openmixup/tree/main/docs/en/model_zoos) |
[👀Awesome Mixup](https://openmixup.readthedocs.io/en/latest/awesome_mixups/Mixup_SL.html) |
[🔍Awesome MIM](https://openmixup.readthedocs.io/en/latest/awesome_selfsup/MIM.html) |
[🆕News](https://openmixup.readthedocs.io/en/latest/changelog.html)
## Introduction
The main branch works with **PyTorch 1.8** (required by some self-supervised methods) or higher (we recommend **PyTorch 1.12**). You can still use **PyTorch 1.6** for supervised classification methods.
`OpenMixup` is an open-source PyTorch toolbox for supervised, self-, and semi-supervised visual representation learning, with a particular focus on mixup-related methods. *`OpenMixup` is currently being updated to adopt the new features and code structure of OpenMMLab 2.0 ([#42](https://github.com/Westlake-AI/openmixup/issues/42)).*
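For readers new to the topic, the snippet below is a minimal, framework-agnostic sketch of the vanilla mixup operation that these methods build on. It is written against plain PyTorch and is *not* OpenMixup's API; the function and argument names are illustrative only.

```python
# Minimal sketch of vanilla mixup (illustrative only, not OpenMixup's API):
# a batch and its one-hot labels are linearly interpolated with a
# Beta(alpha, alpha)-distributed coefficient lambda.
import torch

def mixup_batch(images, one_hot_labels, alpha=0.2):
    """Return a mixed batch (x_tilde, y_tilde) following vanilla mixup."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    index = torch.randperm(images.size(0))  # pair each sample with a random partner
    mixed_images = lam * images + (1.0 - lam) * images[index]
    mixed_labels = lam * one_hot_labels + (1.0 - lam) * one_hot_labels[index]
    return mixed_images, mixed_labels
```

OpenMixup generalizes this idea to many mixup variants (e.g., CutMix, ManifoldMix) behind a unified training framework.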
**Major Features**
- **Modular Design.**
OpenMixup follows a code architecture similar to that of OpenMMLab projects, decomposing the framework into components so that users can easily build customized models by combining different modules (see the config sketch after this feature list). OpenMixup is also transplantable to OpenMMLab projects (e.g., [MMPreTrain](https://github.com/open-mmlab/mmpretrain)).
- **All in One.**
OpenMixup provides popular backbones, mixup methods, semi-supervised, and self-supervised algorithms. Users can perform image classification (CNN & Transformer) and self-supervised pre-training (contrastive and autoregressive) under the same framework.
- **Standard Benchmarks.**
OpenMixup supports standard benchmarks for image classification, mixup classification, and self-supervised evaluation, and provides smooth evaluation on downstream tasks with open-source projects (e.g., object detection and segmentation on [Detectron2](https://github.com/facebookresearch/detectron2) and [MMSegmentation](https://github.com/open-mmlab/mmsegmentation)).
- **State-of-the-art Methods.**
OpenMixup provides awesome lists of popular mixup and self-supervised methods, and is continuously updated to support more state-of-the-art image classification and self-supervised methods.
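To illustrate the modular design mentioned above, here is a hypothetical OpenMMLab-style config sketch in which a model is assembled by naming registered components in nested dictionaries. The concrete type names and keys below are assumptions for illustration; refer to the files under `configs/` and the config tutorial for the real ones.

```python
# Hypothetical config sketch in the OpenMMLab dictionary style used by OpenMixup.
# All type names and keys here are illustrative assumptions, not the exact API.
model = dict(
    type='MixUpClassification',              # algorithm wrapper (name assumed)
    backbone=dict(type='ResNet', depth=50),  # swap in any registered backbone
    head=dict(type='ClsHead', num_classes=1000),
    alpha=0.2,                               # mixup interpolation strength (key assumed)
)
data = dict(samples_per_gpu=64, workers_per_gpu=4)  # illustrative data settings
```

Because components are referenced by name, switching the backbone or the mixup method is usually a one-line change in the config rather than a code change.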
**Table of Contents**

- [Introduction](#introduction)
- [News and Updates](#news-and-updates)
- [Installation](#installation)
- [Getting Started](#getting-started)
- [Overview of Model Zoo](#overview-of-model-zoo)
- [Change Log](#change-log)
- [License](#license)
- [Acknowledgement](#acknowledgement)
- [Citation](#citation)
- [Contributors and Contact](#contributors-and-contact)
## News and Updates
[2025-03-19] `OpenMixup` v0.2.10 is released, supporting **PyTorch >= 2.0** and more mixup augmentations and networks.
## Installation
OpenMixup is compatible with **Python 3.6/3.7/3.8/3.9** and **PyTorch >= 1.6**. Here is a quick installation in development mode:
```shell
conda create -n openmixup python=3.8 pytorch=1.12 cudatoolkit=11.3 torchvision -c pytorch -y
conda activate openmixup
pip install openmim
mim install mmcv-full
git clone https://github.com/Westlake-AI/openmixup.git
cd openmixup
python setup.py develop
```
Installation with PyTorch 2.x requires a different process:
```bash
conda create -n openmixup python=3.9
conda activate openmixup
pip install torch==2.1.0 torchvision==0.16.0 torchaudio==2.1.0 --index-url https://download.pytorch.org/whl/cu118
pip install https://download.openmmlab.com/mmcv/dist/cu118/torch2.1.0/mmcv_full-1.7.2-cp39-cp39-manylinux1_x86_64.whl
git clone https://github.com/Westlake-AI/openmixup.git
cd openmixup
pip install -r requirements/runtime.txt
python setup.py develop
```
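After either installation route, a quick sanity check such as the one below confirms that the core dependencies are importable. It assumes that `openmixup`, `mmcv`, and `torch` expose a `__version__` attribute, as OpenMMLab-style packages typically do.

```python
# Sanity check for the environment created above; assumes openmixup, mmcv, and
# torch expose a __version__ attribute (standard for these packages).
import torch
import mmcv
import openmixup

print("PyTorch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
print("MMCV:", mmcv.__version__)
print("OpenMixup:", openmixup.__version__)
```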
For more detailed installation instructions and dataset preparation, please refer to [install.md](docs/en/install.md).
## Getting Started
OpenMixup supports Linux and macOS. It enables easy implementation and extensions of mixup data augmentation methods in existing supervised, self-, and semi-supervised visual recognition models. Please see [get_started.md](docs/en/get_started.md) for the basic usage of OpenMixup.
### Training and Evaluation Scripts
Here, we provide a script for launching quick end-to-end training on multiple GPUs with a specified `CONFIG_FILE`:
```shell
bash tools/dist_train.sh ${CONFIG_FILE} ${GPUS} [optional arguments]
```
For example, you can run the script below to train a ResNet-50 classifier on ImageNet with 4 GPUs:
```shell
CUDA_VISIBLE_DEVICES=0,1,2,3 PORT=29500 bash tools/dist_train.sh configs/classification/imagenet/resnet/resnet50_4xb64_cos_ep100.py 4
```
After training, you can test the trained models with the corresponding evaluation script:
```shell
bash tools/dist_test.sh ${CONFIG_FILE} ${GPUS} ${PATH_TO_MODEL} [optional arguments]
```
### Development
Please see [Tutorials](docs/en/tutorials) for more development examples and technical details:
- [config files](docs/en/tutorials/0_config.md)
- [add new dataset](docs/en/tutorials/1_new_dataset.md)
- [data pipeline](docs/en/tutorials/2_data_pipeline.md)
- [add new modules](docs/en/tutorials/3_new_module.md) (a registration sketch follows this list)
- [customize schedules](docs/en/tutorials/4_schedule.md)
- [customize runtime](docs/en/tutorials/5_runtime.md)
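As referenced in the "add new modules" tutorial above, OpenMMLab-style projects such as OpenMixup extend the framework through registries. The sketch below shows the general pattern using mmcv's `Registry`; the actual registry objects and import paths in OpenMixup may differ, so treat the names here as assumptions.

```python
# General registry pattern used by OpenMMLab-style codebases (names illustrative).
import torch.nn as nn
from mmcv.utils import Registry  # available in mmcv-full 1.x

BACKBONES = Registry('backbone')  # OpenMixup defines its own registries; this one is a stand-in

@BACKBONES.register_module()
class TinyBackbone(nn.Module):
    """A toy backbone registered by name so configs can refer to it as a string."""

    def __init__(self, in_channels=3, out_channels=64):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, out_channels, kernel_size=3, stride=2, padding=1)

    def forward(self, x):
        # mm-style backbones usually return a tuple of feature maps
        return (self.stem(x),)

# A config can then instantiate the module by its registered name:
backbone = BACKBONES.build(dict(type='TinyBackbone', out_channels=128))
```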
**Downstream Tasks for Self-supervised Learning**
- [Classification](docs/en/tutorials/ssl_classification.md)
- [Detection](docs/en/tutorials/ssl_detection.md)
- [Segmentation](docs/en/tutorials/ssl_segmentation.md)
**Useful Tools**
- [Analysis](docs/en/tutorials/analysis.md)
- [Visualization](docs/en/tutorials/visualization.md)
- [pytorch2onnx](docs/en/tutorials/pytorch2onnx.md) (a generic export sketch follows this list)
- [pytorch2torchscript](docs/en/tutorials/pytorch2torchscript.md)
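As a complement to the pytorch2onnx tutorial referenced above, the generic sketch below shows the underlying `torch.onnx.export` mechanism on a stand-in torchvision model. OpenMixup ships its own conversion script, so this is only an illustration of the mechanism, not the repository's tool.

```python
# Generic ONNX export of a stand-in classifier via torch.onnx.export
# (illustration only; use the repository's pytorch2onnx tool for OpenMixup models).
import torch
import torchvision

model = torchvision.models.resnet50(weights=None).eval()  # requires torchvision >= 0.13
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(
    model,
    dummy_input,
    "resnet50.onnx",
    input_names=["input"],
    output_names=["logits"],
    opset_version=11,
)
```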
## Overview of Model Zoo
Please refer to each config page for experiments and results. See [Mixup Benchmarks](docs/en/mixup_benchmarks) for benchmarking results of mixup methods, and [Model Zoos Sup](docs/en/model_zoos/Model_Zoo_sup.md) and [Model Zoos SSL](docs/en/model_zoos/Model_Zoo_selfsup.md) for a comprehensive collection of mainstream backbones and self-supervised algorithms. We also provide the paper lists [Awesome Mixups](docs/en/awesome_mixups) and [Awesome MIM](docs/en/awesome_selfsup/MIM.md) for reference. Config files and links to pretrained models are provided on the respective config pages; checkpoints and training logs are being updated continuously.
The model zoo covers the following groups of supported components:

- Supported Backbone Architectures
- Mixup Data Augmentations
- Self-supervised Learning Algorithms
- Supported Datasets
## Change Log
Please refer to [changelog.md](docs/en/changelog.md) for more details and release history.
## License
This project is released under the [Apache 2.0 license](LICENSE). See `LICENSE` for more information.
## Acknowledgement
- OpenMixup is an open-source project for mixup methods and visual representation learning created by researchers in **CAIRI AI Lab**. We encourage researchers interested in backbone architectures, mixup augmentations, and self-supervised learning methods to contribute to OpenMixup!
- This project borrows the architecture design and part of the code from [MMPreTrain](https://github.com/open-mmlab/mmpretrain) and the official implementations of the supported algorithms.
## Citation
If you find this project useful in your research, please consider starring `OpenMixup` or citing our [tech report](https://arxiv.org/abs/2209.04851):
```BibTeX
@article{li2022openmixup,
title = {OpenMixup: A Comprehensive Mixup Benchmark for Visual Classification},
author = {Siyuan Li and Zedong Wang and Zicheng Liu and Di Wu and Cheng Tan and Stan Z. Li},
journal = {ArXiv},
year = {2022},
volume = {abs/2209.04851}
}
```
## Contributors and Contact
For help, new features, or bug reports related to OpenMixup, please open a [GitHub issue](https://github.com/Westlake-AI/openmixup/issues) or [pull request](https://github.com/Westlake-AI/openmixup/pulls) with the tag "help wanted" or "enhancement". For now, the direct contributors include: Siyuan Li ([@Lupin1998](https://github.com/Lupin1998)), Zedong Wang ([@Jacky1128](https://github.com/Jacky1128)), and Zicheng Liu ([@pone7](https://github.com/pone7)). We thank all public contributors and contributors from MMPreTrain (MMSelfSup and MMClassification)!
This repo is currently maintained by:
- Siyuan Li (lisiyuan@westlake.edu.cn), Westlake University
- Zedong Wang (wangzedong@westlake.edu.cn), Westlake University
- Zicheng Liu (liuzicheng@westlake.edu.cn), Westlake University