# EDSR-PyTorch

PyTorch version of the paper "Enhanced Deep Residual Networks for Single Image Super-Resolution" (CVPRW 2017). MIT License.

**About PyTorch 1.2.0**

**About PyTorch 1.1.0**
This repository is an official PyTorch implementation of the paper "Enhanced Deep Residual Networks for Single Image Super-Resolution" from the 2nd NTIRE workshop at CVPR 2017. You can find the original code and more information here.
If you find our work useful in your research or publication, please cite our work:
[1] Bee Lim, Sanghyun Son, Heewon Kim, Seungjun Nah, and Kyoung Mu Lee, "Enhanced Deep Residual Networks for Single Image Super-Resolution," 2nd NTIRE: New Trends in Image Restoration and Enhancement workshop and challenge on image super-resolution in conjunction with CVPR 2017. [PDF] [arXiv] [Slide]
@InProceedings{Lim_2017_CVPR_Workshops,
author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu},
title = {Enhanced Deep Residual Networks for Single Image Super-Resolution},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}
We provide scripts for reproducing all the results from our paper. You can train your model from scratch, or use a pre-trained model to enlarge your images.
**Differences from the Torch version**
Clone this repository into any place you want.

```bash
git clone https://github.com/thstkdgus35/EDSR-PyTorch
cd EDSR-PyTorch
```
You can test our super-resolution algorithm with your own images. Place your images in the `test` folder (e.g. `test/<your_image>`). We support png and jpeg files.

Run the script in the `src` folder. Before you run the demo, please uncomment the line in `demo.sh` that you want to execute.

```bash
cd src       # You are now in */EDSR-PyTorch/src
sh demo.sh
```

You can find the result images in the `experiment/test/results` folder.
| Model | Scale | File name (.pt) | Parameters | **PSNR |
|---|---|---|---|---|
| EDSR | 2 | EDSR_baseline_x2 | 1.37 M | 34.61 dB |
| | | *EDSR_x2 | 40.7 M | 35.03 dB |
| | 3 | EDSR_baseline_x3 | 1.55 M | 30.92 dB |
| | | *EDSR_x3 | 43.7 M | 31.26 dB |
| | 4 | EDSR_baseline_x4 | 1.52 M | 28.95 dB |
| | | *EDSR_x4 | 43.1 M | 29.25 dB |
| MDSR | 2 | MDSR_baseline | 3.23 M | 34.63 dB |
| | | *MDSR | 7.95 M | 34.92 dB |
| | 3 | MDSR_baseline | | 30.94 dB |
| | | *MDSR | | 31.22 dB |
| | 4 | MDSR_baseline | | 28.97 dB |
| | | *MDSR | | 29.24 dB |
*Baseline models are in `experiment/model`. Please download our final models from here (542MB).

**We measured PSNR using DIV2K 0801 ~ 0900, RGB channels, without self-ensemble. (scale + 2) pixels from the image boundary are ignored.
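As an illustrative sketch of the evaluation protocol described above (not the repository's actual metric code), RGB PSNR with a (scale + 2)-pixel boundary crop can be written as:

```python
import numpy as np

def psnr_rgb(sr, hr, scale):
    """PSNR over RGB channels, ignoring (scale + 2) boundary pixels.

    sr, hr: uint8 arrays of shape (H, W, 3). Hypothetical helper that
    follows the protocol stated in the text, not the repo's own code.
    """
    shave = scale + 2
    sr = sr[shave:-shave, shave:-shave].astype(np.float64)
    hr = hr[shave:-shave, shave:-shave].astype(np.float64)
    mse = np.mean((sr - hr) ** 2)
    if mse == 0:
        return float("inf")
    # 8-bit images: peak signal value is 255
    return 10 * np.log10(255.0 ** 2 / mse)
```

Note that cropping the boundary before computing the MSE matters: upsamplers are least reliable near image borders, so uncropped PSNR would be systematically lower.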
You can evaluate your models with widely-used benchmark datasets:

* Set5 - Bevilacqua et al. BMVC 2012,
* Set14 - Zeyde et al. LNCS 2010,
* B100 - Martin et al. ICCV 2001,
* Urban100 - Huang et al. CVPR 2015.

For these datasets, we first convert the result images to the YCbCr color space and evaluate PSNR on the Y channel only. You can download the benchmark datasets (250MB). Set `--dir_data <where_benchmark_folder_located>` to evaluate EDSR and MDSR with the benchmarks.
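The Y-channel protocol above can be sketched as follows. This is a hedged illustration, assuming the ITU-R BT.601 luma coefficients that SR benchmarks commonly use (matching Matlab's `rgb2ycbcr`); it is not the repository's evaluation code:

```python
import numpy as np

def rgb_to_y(img):
    """Luma (Y) channel of an 8-bit RGB image, ITU-R BT.601 coefficients
    (assumption: the conversion benchmarks typically use)."""
    img = img.astype(np.float64)
    return (65.738 * img[..., 0] + 129.057 * img[..., 1]
            + 25.064 * img[..., 2]) / 256 + 16

def psnr_y(sr, hr, shave=4):
    """PSNR on the Y channel only, ignoring `shave` boundary pixels."""
    y_sr = rgb_to_y(sr)[shave:-shave, shave:-shave]
    y_hr = rgb_to_y(hr)[shave:-shave, shave:-shave]
    mse = np.mean((y_sr - y_hr) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)
```

Evaluating on Y only is the convention for these benchmarks because earlier SR methods operated on luma alone, so Y-channel PSNR keeps numbers comparable across papers.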
You can download some results from here. The link contains EDSR+_baseline_x4 and EDSR+_x4. Otherwise, you can easily generate result images with the `demo.sh` script.
We used the DIV2K dataset to train our model. Please download it from here (7.1GB). Unpack the tar file to any place you want, then change the `dir_data` argument in `src/option.py` to the place where the DIV2K images are located.

We recommend pre-processing the images before training. This step decodes all png files and saves them as binaries. Use the `--ext sep_reset` argument on your first run; afterwards, you can skip the decoding step and use the saved binaries with the `--ext sep` argument. If you have enough RAM (>= 32GB), you can use the `--ext bin` argument to pack all DIV2K images into a single binary file.
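The caching idea behind the decode-once workflow above can be sketched like this. The helper name and structure are hypothetical (not the repository's data-loading code), and the png decoder is passed in as a function so the sketch stays self-contained:

```python
import os
import numpy as np

def load_cached(png_path, decode, reset=False):
    """Decode a png once and cache it as .npy; later runs load the binary.

    `decode` is any callable mapping a file path to an ndarray (e.g. a png
    reader). Hypothetical helper illustrating the `--ext sep` /
    `--ext sep_reset` idea, not the repo's actual code.
    """
    npy_path = os.path.splitext(png_path)[0] + ".npy"
    if reset or not os.path.exists(npy_path):
        arr = decode(png_path)   # slow path: full png decode
        np.save(npy_path, arr)   # cache the raw array next to the image
        return arr
    return np.load(npy_path)     # fast path: plain binary read
```

The win comes from skipping png entropy decoding on every epoch; reading a raw `.npy` array is mostly a single sequential disk read.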
You can train EDSR and MDSR by yourself. All scripts are provided in `src/demo.sh`. Note that EDSR (x3, x4) requires a pre-trained EDSR (x2) model; you can ignore this constraint by removing the `--pre_train <x2 model>` argument.

```bash
cd src       # You are now in */EDSR-PyTorch/src
sh demo.sh
```
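The x2 → x3/x4 warm start works because most of the network is scale-agnostic; only scale-specific parts (such as the upsampler) cannot be reused. A minimal sketch of such a partial weight transfer, using plain dicts to mimic a state_dict (hypothetical helper, not the repository's checkpoint-loading code):

```python
import numpy as np

def transfer_matching(pretrained, target):
    """Copy parameters whose name and shape match; skip the rest.

    Both arguments map parameter names to arrays, mimicking a state_dict.
    Sketch of warm-starting EDSR x3/x4 from a pre-trained x2 model:
    shared body weights are copied, scale-specific weights are skipped.
    """
    copied, skipped = [], []
    for name, value in pretrained.items():
        if name in target and target[name].shape == value.shape:
            target[name] = value.copy()
            copied.append(name)
        else:
            skipped.append(name)   # e.g. upsampler with a different shape
    return copied, skipped
```

In real PyTorch code the same effect is usually achieved by filtering the state_dict before calling `load_state_dict` with `strict=False`.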
**Update log**
* Jan 04, 2018
* Jan 09, 2018 (see `src/data/MyImage.py`)
* Jan 16, 2018
* Feb 21, 2018
* Feb 23, 2018
  * PyTorch 0.3.1 is now the default. Use the `legacy/0.3.0` branch if you use the old version.
  * With the new `src/data/DIV2K.py` code, one can easily create a new data class for super-resolution.
  * New binary data pack. (Please remove the `DIV2K_decoded` folder from your dataset if you have one.) With `--ext bin`, this code automatically generates and saves a binary data pack corresponding to the previous `DIV2K_decoded`. This requires a huge amount of RAM (~45GB; swap can be used), so please be careful. If you cannot build the binary pack, use the default setting (`--ext img`).
  * Fixed a bug where the PSNR in the log did not match the PSNR calculated from the saved images. Saved images now have better quality (PSNR is ~0.1dB higher than the original code).
  * Added a performance comparison between the Torch7 model and the PyTorch models.
* Mar 5, 2018
  * Use `--precision half` to enable half-precision evaluation. This does not degrade the output images.
* Mar 11, 2018
* Mar 20, 2018
  * Use `--ext sep-reset` to pre-decode large png files. The decoded files are saved in the same directory as the DIV2K png files; after the first run, you can use `--ext sep` to save time.
  * Use `--data_test Set5` to test your model on the Set5 images.
* Mar 29, 2018
  * Added an `MDSR_baseline_jpeg` model that suppresses JPEG artifacts in the original low-resolution image. Please use it if you have any trouble.
  * The `MyImage` dataset is renamed to the `Demo` dataset and now works more efficiently than before.
* Apr 9, 2018
* Apr 26, 2018
* July 22, 2018
  * Use `code/demo.sh` to train/test those models.
  * Please remove the `DIV2K/bin` folder created before this commit, and avoid using the `--ext bin` argument; our code now automatically pre-decodes png images before training. If you do not have enough space (~10GB) on your disk, we recommend `--ext img` (but it is SLOW!).
* Oct 18, 2018
  * With `--pre_train download`, pretrained models are automatically downloaded from the server.
  * You can run the model on video input with `--data_test video --dir_demo [video file directory]`.
* About PyTorch 1.0.0
  * `--ext bin` is not supported. Also, please erase your old bin files with `--ext sep-reset`; once you have successfully built the new bin files, you can remove `-reset` from the argument.