Fried Rice Lab

We will release the code resources of our works here. We have also implemented many useful features and out-of-the-box image restoration models. We hope this will help you in your work.

This repository is mainly built on BasicSR.

FRL News

23.01.26 Released the code resources of ESWT

23.01.11 FRL code v2.0 has been released πŸŽ‰

22.11.15 Here we are πŸͺ§

Our Works

(ESWT) Image Super-Resolution using Efficient Striped Window Transformer [arXiv]

Jinpeng Shi*^, Hui Li, Tianle Liu, Yulong Liu, Mingjian Zhang, Jinchen Zhu, Ling Zheng, Shizhuang Weng^

Recently, transformer-based methods have made impressive progress in single-image super-resolution (SR). However, these methods are difficult to apply to lightweight SR (LSR) due to the challenge of balancing model performance and complexity. In this paper, we propose an efficient striped window transformer (ESWT). ESWT consists of efficient transformation layers (ETLs), which allow a clean structure and avoid redundant operations. Moreover, we design a striped window mechanism to make ESWT more efficient at modeling long-term dependencies. To further exploit the potential of the transformer, we propose a novel flexible window training strategy, which can further improve the performance of ESWT at no additional cost. Extensive experiments show that the proposed method outperforms state-of-the-art transformer-based LSR methods with fewer parameters, faster inference, fewer FLOPs, and less memory consumption, achieving a better trade-off between model performance and complexity. [More details and reproduction guidance]

*: First/Co-first author

^: Corresponding author

How to Use

1 Preparation

1.1 Environment

Use the following commands to build the Python environment:

conda create -n frl python
conda activate frl
pip config set global.index-url https://pypi.tuna.tsinghua.edu.cn/simple # Mainland China only!
pip install torch torchvision basicsr einops timm matplotlib

1.2 Dataset

You can download the datasets you need from our OneDrive and place them in the folder datasets. To use the YML configuration files we provide, keep the local folder datasets structured the same way as the OneDrive folder datasets.

Task        Dataset    Relative Path
SISR        DF2K       datasets/sr_data/DF2K
            Set5       datasets/sr_data/Set5
            Set14      datasets/sr_data/Set14
            BSD100     datasets/sr_data/BSD100
            Urban100   datasets/sr_data/Urban100
            Manga109   datasets/sr_data/Manga109
Denoising   SIDD       datasets/denoising_data/SIDD
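
Assuming you download everything, the table above corresponds to the following local layout:

                                        datasets
                                        β”œβ”€β”€ sr_data
                                        β”‚   β”œβ”€β”€ DF2K
                                        β”‚   β”œβ”€β”€ Set5
                                        β”‚   β”œβ”€β”€ Set14
                                        β”‚   β”œβ”€β”€ BSD100
                                        β”‚   β”œβ”€β”€ Urban100
                                        β”‚   └── Manga109
                                        └── denoising_data
                                            └── SIDD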

🀠 All datasets have been processed into LMDB format and do not require any additional processing. The processing of the SISR datasets follows the BasicSR documentation, and the processing of the denoising dataset follows the NAFNet documentation.

🀠 To verify the integrity of your download, please refer to docs/md5.txt.
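
For example, a minimal Python snippet like the following (not part of the FRL code; the comparison against docs/md5.txt is done by eye) computes the MD5 checksum of a downloaded file:

import hashlib

def md5sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the MD5 hex digest of a file, reading it in chunks."""
    md5 = hashlib.md5()
    with open(path, 'rb') as f:
        for block in iter(lambda: f.read(chunk_size), b''):
            md5.update(block)
    return md5.hexdigest()

# Compare the printed digest with the corresponding entry in docs/md5.txt
print(md5sum('path/to/downloaded/file'))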

1.3 Pretrained Weights

You can download the pretrained weights you need from our OneDrive and place them in the folder modelzoo. To use the YML configuration files we provide, keep the local folder modelzoo structured the same way as the OneDrive folder modelzoo.

Source       Model   Relative Path
Official     ESWT    modelzoo/ESWT
Unofficial   ELAN    modelzoo/ELAN

🀠 The unofficial pretrained weights were trained by us, under exactly the same experimental conditions as in the original papers.

2 Run

Unlike BasicSR, the FRL code requires you to specify two YML configuration files. The run command is as follows:

python ${function.py} -expe_opt ${expe.yml} -task_opt ${task.yml}
  • ${function.py} is the function you want to run, e.g. test.py
  • ${expe.yml} is the path to the experimental YML configuration file, which contains the model-related and training-related configuration, e.g. options/expe/ESWT/ESWT_LSR.yml
  • ${task.yml} is the path to the task YML configuration file, which contains the task-related configuration, e.g. options/task/LSR_x4.yml

🀠 A complete experiment consists of three parts: the data, the model, and the training strategy. This design allows their configuration to be decoupled.

For your convenience, we provide a demo test set datasets/demo_data/Demo_Set5 and a demo pretrained weight modelzoo/ELAN/ESWT-24-6_LSR_x4.pth. Use the following commands to try out the main functions of the FRL code.

2.1 Train

This function will train a specified model.

python train.py -expe_opt options/repr/ESWT/ESWT-24-6_LSR.yml -task_opt options/task/LSR_x4.yml

🀠 Use the following demo command instead if you prefer to run in CPU mode:

python train.py -expe_opt options/repr/ESWT/ESWT-24-6_LSR.yml -task_opt options/task/LSR_x4.yml --force_yml num_gpu=0

2.2 Test

This function will test the performance of a specified model on a specified task.

python test.py -expe_opt options/repr/ESWT/ESWT-24-6_LSR.yml -task_opt options/task/LSR_x4.yml

2.3 Analyse

This function will analyze the complexity of a specified model on a specified task, including the following metrics:

  • #Params: total number of learnable parameters

  • #FLOPs: number of floating-point operations

  • #Acts: number of elements in all outputs of convolutional layers

  • #Conv: number of convolutional layers

  • #Memory: maximum GPU memory consumption when running inference on a dataset

  • #Ave. Time: average inference time per image in a dataset

python analyse.py -expe_opt options/repr/ESWT/ESWT-24-6_LSR.yml -task_opt options/task/LSR_x4.yml

⚠️ The #Ave. Time result for the first dataset is inaccurate (higher than the true value), likely because the first measurements include warm-up overhead. We are working on it.

2.4 Interpret

This function is based on the paper "Interpreting Super-Resolution Networks with Local Attribution Maps". When reconstructing the patches marked with red boxes, a higher DI (diffusion index) indicates that a larger range of contextual information is involved, and a darker color indicates a higher degree of contribution.

python interpret.py -expe_opt options/repr/ESWT/ESWT-24-6_LSR.yml -task_opt options/task/LSR_x4.yml

2.5 Infer

You can use this function to restore your own images.

python infer.py -expe_opt options/repr/ESWT/ESWT-24-6_LSR.yml -task_opt options/task/LSR_x4.yml

Useful Features

Data Flow

The image restoration process in BasicSR is as follows:

              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”                β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”                 β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”
              β”‚  image  │───────────────▢│  model  │────────────────▢│  image  β”‚
              β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ pre-processing β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜ post-processing β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

By default, the pre-processing operation normalizes the input image of any bit range (e.g., an 8-bit RGB image) to the [0, 1] range, and the post-processing operation restores the output image to the original bit range. The default data flow is shown below:

              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”                β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”                 β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”
              β”‚ 0-2^BIT │───────────────▢│   0-1   │────────────────▢│ 0-2^BIT β”‚
              β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                 β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

However, for some input images (e.g., 16-bit TIF images), this data flow may lead to unstable training or degraded performance. Therefore, the FRL code provides support for data flows of any bit range. The new data flows are shown below:

              β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”                β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”                 β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”
              β”‚ 0-2^BIT │───────────────▢│ 0-2^bit │────────────────▢│ 0-2^BIT β”‚
              β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜                 β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

You can try different data flows by simply changing the parameter bit in the file ${expe.yml}. Set it to 0 to use the default data flow of BasicSR.
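
The mapping can be sketched as follows (a minimal illustration of our reading of the diagrams above, not the actual FRL pre-/post-processing code):

import numpy as np

def pre_process(img: np.ndarray, src_bit: int = 8, bit: int = 0) -> np.ndarray:
    # bit = 0: default BasicSR data flow, map [0, 2^src_bit] to [0, 1]
    # bit > 0: FRL data flow, map [0, 2^src_bit] to [0, 2^bit]
    target = 1.0 if bit == 0 else float(2 ** bit)
    return img.astype(np.float32) * target / float(2 ** src_bit)

def post_process(out: np.ndarray, src_bit: int = 8, bit: int = 0) -> np.ndarray:
    # Restore the model output to the original [0, 2^src_bit] range
    target = 1.0 if bit == 0 else float(2 ** bit)
    return out * float(2 ** src_bit) / target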

🀠 We tested the impact of different data flows on the SISR task (by retraining the EDSR, RFDN, and ELAN models using 8-bit RGB images). The results show that 8-bit models (trained with 8-bit data flow) perform slightly better than 0-bit models.

⚠️ We did not test the impact of different data flows on other image restoration tasks.

⚠️ Using new data flows may lead to inaccurate metric results (PSNR: error less than 0.001; SSIM: error less than 0.00001). To get more accurate metric results, use scripts/evaluate.m instead.

LMDB Loading

A standard BasicSR LMDB database structure is as follows:

                                        demo.lmdb
                                        β”œβ”€β”€ data.mdb
                                        β”œβ”€β”€ lock.mdb
                                        └── meta_info.txt

By default, BasicSR automatically reads the file demo.lmdb/meta_info.txt when loading an LMDB database. In the FRL code, you can instead specify which meta_info.txt file to use when loading the LMDB database. This makes it easier to process datasets, such as splitting one dataset into a training set and a test set.
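
For example, a hypothetical helper like the following (not part of the FRL code) splits one meta_info.txt into separate train and test meta files, each of which can then be specified when loading the same LMDB database:

import random
from pathlib import Path

def split_meta_info(meta_path: str, test_ratio: float = 0.1, seed: int = 0) -> None:
    """Randomly split a meta_info.txt into train/test meta files."""
    lines = Path(meta_path).read_text().splitlines()
    random.Random(seed).shuffle(lines)
    n_test = int(len(lines) * test_ratio)
    parent = Path(meta_path).parent
    (parent / 'meta_info_test.txt').write_text('\n'.join(lines[:n_test]) + '\n')
    (parent / 'meta_info_train.txt').write_text('\n'.join(lines[n_test:]) + '\n')

split_meta_info('demo.lmdb/meta_info.txt')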

🀠 The LMDB database used by BasicSR has a unique format. More information about the LMDB database and the file meta_info.txt can be found in the BasicSR documentation.

Model Customization

Unlike BasicSR, all models in the FRL code must take the following four parameters:

  • upscale: upscale factor, e.g., 2, 3, and 4 for the lsr task, and 1 for the denoising task
  • num_in_ch: number of input channels
  • num_out_ch: number of output channels
  • task: image restoration task, e.g., lsr, csr, or denoising

A demo model implementation is as follows:

import torch


class DemoModel(torch.nn.Module):
    def __init__(self, upscale: int, num_in_ch: int, num_out_ch: int, task: str,  # noqa
                 num_groups: int, num_blocks: int, *args, **kwargs) -> None:  # noqa
        super().__init__()
        # Define your network layers here

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Implement the forward pass here
        pass

🀠 The FRL code automatically assigns values to these parameters based on the task configuration file used, so you do not need to define them in the parameter network_g.

network_g:
   type: DemoModel
   # Only the following parameters are required!
   num_groups: 20
   num_blocks: 10
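
For example, with the task configuration LSR_x4 and the network_g above, DemoModel would effectively be instantiated as follows (the channel numbers are illustrative, assuming RGB input and output):

model = DemoModel(upscale=4, num_in_ch=3, num_out_ch=3, task='lsr',
                  num_groups=20, num_blocks=10)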

Out-Of-The-Box Models

Standing on the shoulders of giants allows us to grow quickly. So, we implemented many out-of-the-box image restoration models that may help your work. Please refer to the folder archs and the folder options/expe for more details.

You can use the following command out of the box!

# train EDSR to solve the 2x classic super-resolution task
python train.py -expe_opt options/expe/EDSR/EDSR_CSR.yml -task_opt options/task/CSR_x2.yml

# test the performance of IMDN on the 3x lightweight super-resolution task
python test.py -expe_opt options/expe/IMDN/IMDN_LSR.yml -task_opt options/task/LSR_x3.yml

# analyse the complexity of RFDN on the 4x lightweight super-resolution task
python analyse.py -expe_opt options/expe/RFDN/RFDN_LSR.yml -task_opt options/task/LSR_x4.yml

We provide many experimental and task YML configuration files. To perform different experiments, feel free to combine them in the command.

🀠 If these implementations help your work, please consider citing them.

Acknowledgements

This code is mainly based on BasicSR. We thank its developers for creating such a useful toolbox. The code of the analyse function is based on NTIRE2022 ESR, and the code of the interpret function is based on LAM. All other image restoration model code comes from the official GitHub repositories of the respective models. More details can be found in their implementations.

Contact

This repository is maintained by Jinpeng Shi (jinpeeeng.s@gmail.com). Special thanks to Tianle Liu (tianle.l@outlook.com) for his excellent work on code testing. As our capacity is limited, we welcome any PRs.
