benchmark.md

# Training and Evaluation

For training and evaluation, you must install the customized rawpy package provided in ELD.
If you set up the environment with our install.sh script, this has already been done and you can ignore this message.
Otherwise, please refer to install.md for further instructions.
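If you are unsure whether your environment is ready, a quick (and admittedly coarse) check is whether rawpy imports at all. The helper below is hypothetical, not part of the repo, and it cannot tell the customized ELD build from the stock PyPI one, so a pass here is only a necessary condition:

```python
import importlib.util

# Hypothetical helper: check that a package is importable at all.
# Note: this cannot distinguish the customized ELD build of rawpy
# from the stock PyPI package.
def module_available(name: str) -> bool:
    return importlib.util.find_spec(name) is not None

if not module_available("rawpy"):
    print("rawpy not found -- please follow install.md")
```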

## Data Preparation

| Dataset | 🔗 Source | Conf. | Shot on | CFA Pattern |
| :-: | :-: | :-: | :-: | :-: |
| SID | [Homepage] [Github] [Dataset (Google Drive / Baidu Cloud)] | CVPR 2018 | Sony A7S2 | Bayer (RGGB) |
| ELD | [Github] [Google Drive] [Dataset (Google Drive / Baidu Cloud)] | CVPR 2020 | Sony A7S2 / Nikon D850 / Canon EOS70D / Canon EOS700D | Bayer |

After downloading all the datasets above, you can symlink them into the dataset folder.

```bash
mkdir datasets/ICCV23-LED && cd datasets/ICCV23-LED
ln -s your/path/to/SID/Sony  ./Sony
ln -s your/path/to/ELD       ./ELD_sym
```

Or just put them directly in the datasets/ICCV23-LED folder.
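Before moving on, it can help to confirm that the layout matches what the later scripts expect. The check below is a hypothetical helper, with the folder names taken from the symlink step above:

```python
from pathlib import Path

# Hypothetical helper: verify the expected dataset layout exists.
# Folder names match the symlink step above; adjust if yours differ.
def check_layout(root: str = "datasets/ICCV23-LED") -> list:
    expected = ["Sony", "ELD_sym"]
    # Return the sub-folders that are missing; empty list means all good.
    return [d for d in expected if not (Path(root) / d).is_dir()]

missing = check_layout()
if missing:
    print("missing dataset folders:", missing)
```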

### Acceleration for Training and Testing

If you have symlinked the data into datasets/ICCV23-LED as in the steps above, you can simply run

```bash
bash scripts/data_preparation/accelerate.sh
```

to accelerate all the data at once.

To figure out what the script is actually doing, please continue reading.

#### Training

Like other methods, we use the long subset of the SID Sony dataset for training. For fast training, we crop the data into patches in advance.

```bash
# Extract patches for training.
python scripts/data_preparation/extract_bayer_subimages_with_metadata.py \
    --data-path datasets/ICCV23-LED/Sony/long \
    --save-path datasets/ICCV23-LED/Sony_train_long_patches \
    --suffix ARW \
    --n-thread 10
```
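Conceptually, the extraction script slides a fixed-size window over each full-resolution frame and saves the crops. A minimal sketch of non-overlapping cropping, with the 512-pixel patch size chosen arbitrarily for illustration (the real script also stores Bayer metadata alongside each patch):

```python
import numpy as np

def extract_patches(img: np.ndarray, patch: int = 512, stride: int = 512) -> list:
    """Split an (H, W, ...) array into crops; edges that do not fill a
    full patch are dropped. Patch/stride values here are illustrative."""
    h, w = img.shape[:2]
    out = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            out.append(img[y:y + patch, x:x + patch])
    return out

# Dummy packed-Bayer-like data: a 1024 x 1536 frame with 4 channels.
raw = np.zeros((1024, 1536, 4), dtype=np.uint16)
patches = extract_patches(raw)  # a 2 x 3 grid of 512-px crops
```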

#### Testing

First, we convert the ELD data into the SID directory structure so that the same dataset class can be used.

```bash
# Convert the ELD data into the SID data structure
python scripts/data_preparation/eld_to_sid_structure.py \
    --data-path datasets/ICCV23-LED/ELD_sym \
    --save-path datasets/ICCV23-LED/ELD
```
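The conversion essentially re-nests the scene-based ELD folders into the layout the SID dataset class expects. As a rough, hypothetical illustration of that kind of re-nesting (the real script's naming and placement rules differ), a tree can be mirrored like so:

```python
import shutil
from pathlib import Path

def mirror_tree(src, dst) -> None:
    """Copy every file from src into dst, preserving relative paths.
    This is only an illustration of tree restructuring; the real
    eld_to_sid_structure.py applies ELD-specific naming rules."""
    src, dst = Path(src), Path(dst)
    for f in src.rglob("*"):
        if f.is_file():
            target = dst / f.relative_to(src)
            target.parent.mkdir(parents=True, exist_ok=True)
            shutil.copy2(f, target)  # a real converter could symlink instead
```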

Then, for testing, we convert all the RAW files into NumPy array format.

```bash
# convert SID SonyA7S2
python scripts/data_preparation/bayer_to_npy.py --data-path datasets/ICCV23-LED/Sony --save-path datasets/ICCV23-LED/Sony_npy --suffix ARW --n-thread 8
# convert ELD SonyA7S2
python scripts/data_preparation/bayer_to_npy.py --data-path datasets/ICCV23-LED/ELD/SonyA7S2 --save-path datasets/ICCV23-LED/ELD_npy/SonyA7S2 --suffix ARW --n-thread 8
# convert ELD NikonD850
python scripts/data_preparation/bayer_to_npy.py --data-path datasets/ICCV23-LED/ELD/NikonD850 --save-path datasets/ICCV23-LED/ELD_npy/NikonD850 --suffix nef --n-thread 8
# convert ELD CanonEOS70D
python scripts/data_preparation/bayer_to_npy.py --data-path datasets/ICCV23-LED/ELD/CanonEOS70D --save-path datasets/ICCV23-LED/ELD_npy/CanonEOS70D --suffix CR2 --n-thread 8
# convert ELD CanonEOS700D
python scripts/data_preparation/bayer_to_npy.py --data-path datasets/ICCV23-LED/ELD/CanonEOS700D --save-path datasets/ICCV23-LED/ELD_npy/CanonEOS700D --suffix CR2 --n-thread 8
```
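The point of this step is to pay the RAW-decoding cost once: each Bayer frame is decoded (via the customized rawpy in the real script) and cached as .npy, which later loads are near-instant. A minimal sketch of the caching half, assuming the frame has already been decoded into an array:

```python
import numpy as np

def cache_as_npy(bayer: np.ndarray, save_path: str) -> np.ndarray:
    """Persist a decoded Bayer frame as .npy and memory-map it back.
    A sketch only: the real script obtains `bayer` from RAW files via
    the customized rawpy package."""
    if not save_path.endswith(".npy"):
        save_path += ".npy"  # np.save appends .npy itself; keep paths consistent
    np.save(save_path, bayer)
    # mmap_mode="r" avoids reading the whole file eagerly on later loads
    return np.load(save_path, mmap_mode="r")
```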

### Download the Data Pair List

A summary of the data pair lists can be found in Google Drive.

Since commit fadffc7, the data pair list for benchmark has been added in datasets/txtfiles.

Like SID, we use txt files to identify the images used for training or testing.
To evaluate or train LED with our code, you should download the corresponding txt files and put them in the right place, or change the data_pair_list property in the dataset:train:dataroot option.
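SID-style pair lists are plain text with one image pair per line. Assuming each line starts with the noisy (short-exposure) path followed by the clean (long-exposure) path, with any further columns being metadata — the exact column layout is an assumption here — a minimal reader might look like:

```python
def read_pair_list(path: str) -> list:
    """Parse a SID-style pair list into (noisy, clean) path tuples.
    Assumed line format: '<short_path> <long_path> [extra metadata...]'."""
    pairs = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) >= 2:
                pairs.append((parts[0], parts[1]))
    return pairs
```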

### Visualization

Since commit fadffc7, the EMoR data for fast visualization has been added in datasets/EMoR.

Download the EMoR files calibrated by ELD from Google Drive or Baidu Cloud for fast visualization on GPU.
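EMoR parameterizes camera response functions as 1-D curves; visualization then amounts to mapping linear RAW intensities through such a curve. A minimal stand-in sketch (the gamma-style curve below is made up for illustration, not the ELD calibration, and runs on CPU rather than GPU):

```python
import numpy as np

# A made-up, gamma-like stand-in for a calibrated EMoR response curve.
curve_x = np.linspace(0.0, 1.0, 1024)
curve_y = curve_x ** (1.0 / 2.2)

def apply_crf(linear):
    """Map linear intensities in [0, 1] through the response curve
    by 1-D interpolation over the sampled curve."""
    return np.interp(np.clip(linear, 0.0, 1.0), curve_x, curve_y)
```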

## Pretrained Models

We provide a model zoo for reproducing our LED. Please refer to the "Network for Benchmark" section in pretrained-models.md.

## Training

We have provided abundant configs for you to reproduce most of the metrics in our paper! Just select a config and run:

```bash
python led/train.py -opt [OPT]
```

To learn more about the config options, please refer to develop.md.

## Evaluation

We have provided a script for fast evaluation:

```bash
python scripts/benckmark.py \
    -t [TAG] \
    -p [PRETRAINED_NET] \
    --dataset [DATASET] [CAMERA_MODEL] \
    [--led] \         # If the model is fine-tuned and deployed by our LED method.
    [--save_img]      # If you would like to save the results.

# e.g.
python scripts/benckmark.py \
    -t test \
    -p pretrained/network_g/LED_Deploy_SID_SonyA7S2_CVPR20_Setting_Ratio100-300.pth \
    --dataset SID SonyA7S2 \
    --led --save_img
# the log and visualization can be found in `results/test`
```