# SOAT

**Repository Path**: as85207/SOAT

## Basic Information

- **Project Name**: SOAT
- **Description**: Official PyTorch repo for StyleGAN of All Trades: Image Manipulation with Only Pretrained StyleGAN.
- **Primary Language**: Unknown
- **License**: MIT
- **Default Branch**: main
- **Homepage**: None
- **GVP Project**: No

## Statistics

- **Stars**: 0
- **Forks**: 0
- **Created**: 2021-11-29
- **Last Updated**: 2021-11-29

## Categories & Tags

**Categories**: Uncategorized

**Tags**: None

## README

# StyleGAN of All Trades: Image Manipulation with Only Pretrained StyleGAN

![](teaser.jpg)

This is the PyTorch implementation of [StyleGAN of All Trades: Image Manipulation with Only Pretrained StyleGAN](https://arxiv.org/abs/2111.01619).

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/mchong6/SOAT/blob/main/infinity.ipynb)

> **Abstract:**
> Recently, StyleGAN has enabled a variety of image manipulation and editing tasks thanks to its high-quality generation and disentangled latent space. However, additional architectures or task-specific training paradigms are usually required for different tasks. In this work, we take a deeper look at the spatial properties of StyleGAN. We show that a pretrained StyleGAN, together with some simple operations and without any additional architecture, can perform comparably to state-of-the-art methods on various tasks, including image blending, panorama generation, generation from a single image, controllable and local multimodal image-to-image translation, and attribute transfer.

## How to use

Everything you need to get started is in the [colab notebook](https://colab.research.google.com/github/mchong6/SOAT/blob/main/infinity.ipynb).

## Toonification

For toonification, you can train a new model yourself by running

```bash
python train.py
```

For Disney toonification, we use the Disney dataset [here](https://github.com/justinpinkney/toonify). Feel free to experiment with different datasets.

## Citation

If you use this code or ideas from our paper, please cite our paper:

```
@article{chong2021stylegan,
    title={StyleGAN of All Trades: Image Manipulation with Only Pretrained StyleGAN},
    author={Chong, Min Jin and Lee, Hsin-Ying and Forsyth, David},
    journal={arXiv preprint arXiv:2111.01619},
    year={2021}
}
```

## Acknowledgments

This code borrows from [StyleGAN2 by rosinality](https://github.com/rosinality/stylegan2-pytorch).
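To give a feel for the kind of "operations on a pretrained StyleGAN, no new architecture" this repo is built around, below is a minimal, framework-free sketch of the layer-swapping recipe commonly used for toonification (as popularized by justinpinkney's toonify): coarse layers come from the original face generator, fine layers from the generator fine-tuned on cartoon data. The helper name `swap_layers`, the `swap_from` parameter, and the `convs.<i>.weight` key layout are illustrative assumptions, not this repo's actual API.

```python
# Hypothetical sketch of layer-swapping toonification (not this repo's API).
# The base (e.g. FFHQ-pretrained) generator supplies the coarse,
# low-resolution layers that control structure and pose; the generator
# fine-tuned on cartoon data supplies the fine, high-resolution layers
# that control texture and style.

def swap_layers(base_state, toon_state, swap_from=4):
    """Blend two generator state dicts.

    Conv layers with index < swap_from are kept from base_state; every
    other parameter is taken from toon_state. Keys are assumed to look
    like 'convs.<i>.weight'.
    """
    blended = {}
    for key, base_weight in base_state.items():
        parts = key.split(".")
        if parts[0] == "convs" and int(parts[1]) < swap_from:
            blended[key] = base_weight       # keep coarse layers (structure)
        else:
            blended[key] = toon_state[key]   # take fine layers (style)
    return blended

# Tiny demonstration with scalars standing in for weight tensors.
base = {"convs.0.weight": 0.0, "convs.5.weight": 0.0, "to_rgb.weight": 0.0}
toon = {"convs.0.weight": 1.0, "convs.5.weight": 1.0, "to_rgb.weight": 1.0}
blended = swap_layers(base, toon, swap_from=4)
# convs.0 stays with the base model; convs.5 and to_rgb follow the toon model.
```

In a real setting the two state dicts would come from `train.py`'s checkpoints and be loaded back into a generator with `load_state_dict`; `swap_from` trades off how much of the original face structure survives toonification.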