# mt3

## Basic Information

- **Project Name**: mt3
- **Description**: MT3: Multi-Task Multitrack Music Transcription
- **License**: Apache-2.0
- **Default Branch**: main

## README

# MT3: Multi-Task Multitrack Music Transcription

MT3 is a multi-instrument automatic music transcription model that uses the [T5X framework](https://github.com/google-research/t5x).

This is not an officially supported Google product.

## Transcribe your own audio

Use our [colab notebook](https://colab.research.google.com/github/magenta/mt3/blob/main/mt3/colab/music_transcription_with_transformers.ipynb) to transcribe audio files of your choosing. You can use a pretrained checkpoint from either a) the piano transcription model described in [our ISMIR 2021 paper](https://archives.ismir.net/ismir2021/paper/000030.pdf) or b) the multi-instrument transcription model described in [our ICLR 2022 paper](https://openreview.net/pdf?id=iMSjopcOn0p).

## Train a model

For now, we do not (easily) support training. If you like, you can try to follow the [T5X training instructions](https://github.com/google-research/t5x#training) and use one of the tasks defined in [tasks.py](mt3/tasks.py).
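If you do attempt training via T5X, the launch would roughly follow the pattern in the T5X README: point the trainer at a gin config and a model directory, with the MT3 task registered via `tasks.py`. This is an unverified sketch, not a supported recipe; the config path and task wiring are assumptions you would need to adapt.

```sh
# Illustrative only -- assumes t5x is installed and a suitable gin
# config referencing an MT3 task from mt3/tasks.py has been written.
python ${T5X_DIR}/t5x/train.py \
  --gin_file="path/to/your_mt3_train_config.gin" \
  --gin.MODEL_DIR="'/tmp/mt3_model'" \
  --logtostderr
```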
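As background to the transcription workflow above: MT3 operates on 16 kHz mono audio, which is split into fixed-length segments before being fed to the model. The sketch below illustrates that segmentation step with plain NumPy; the 2-second segment length is an assumption for illustration, not MT3's actual configuration (the colab handles all of this for you).

```python
import numpy as np

SAMPLE_RATE = 16_000   # MT3 works on 16 kHz mono audio
SEGMENT_SECONDS = 2.0  # assumed segment length, for illustration only


def split_into_segments(audio: np.ndarray,
                        segment_seconds: float = SEGMENT_SECONDS,
                        sample_rate: int = SAMPLE_RATE) -> np.ndarray:
    """Zero-pad `audio` to a whole number of segments and stack them
    into a (num_segments, segment_length) array."""
    seg_len = int(segment_seconds * sample_rate)
    num_segments = -(-len(audio) // seg_len)  # ceiling division
    padded = np.zeros(num_segments * seg_len, dtype=audio.dtype)
    padded[: len(audio)] = audio
    return padded.reshape(num_segments, seg_len)


# Example: 5 seconds of audio -> three 2-second segments (last one padded)
segments = split_into_segments(np.zeros(5 * SAMPLE_RATE, dtype=np.float32))
print(segments.shape)  # (3, 32000)
```

Each row of the result corresponds to one model input window; a real pipeline would also compute spectrogram features per segment, which the colab notebook does internally.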