[ICCV23 LED Logo]

Starting-kit for Few-shot RAW Image Denoising @MIPI2024

[Homepage] [Codalab]

Overview

The Few-shot RAW Image Denoising track is geared towards training neural networks for raw image denoising in scenarios where paired data is limited.

In this starting kit, we provide a possible solution, but you are not required to follow this approach.

We also provide tips on important considerations during the competition and the submission process.

In code_example/tutorial.ipynb, we provide examples and notes on reading data, the lite ISP, score calculation, and submission.
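
As a quick illustration of the scoring step, below is a minimal sketch of a PSNR computation on a denoised/ground-truth pair. The file names and .npy layout in the usage comment are assumptions for illustration; the authoritative data loading, lite ISP, and scoring code live in code_example/tutorial.ipynb.

```python
# Hypothetical sketch of score computation on a denoised/ground-truth pair.
# File names and the .npy format below are assumptions for illustration.
import numpy as np

def psnr(pred: np.ndarray, gt: np.ndarray, data_range: float = 1.0) -> float:
    """Peak signal-to-noise ratio between two images in [0, data_range]."""
    mse = np.mean((pred.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((data_range ** 2) / mse)

# Example usage with hypothetical file names:
# pred = np.load("results/scene_0001_denoised.npy")
# gt   = np.load("data/scene_0001_gt.npy")
# print(f"PSNR: {psnr(pred, gt):.2f} dB")
```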

In the evaluate directory, we provide the validation code that we submitted to Codalab.

Tips

  • You are NOT restricted to training your algorithms only on the provided dataset. Other PUBLIC datasets can be used as well. However, you need to state in the final submitted factsheet which public datasets you have used.
  • For different cameras, you may use different neural network weights at test time.
  • Please ensure that your testing process can run on a single NVIDIA RTX 3090 (i.e., memory usage must stay below 24GB). This is to limit resource usage during deployment; a sketch for checking peak GPU memory follows this list.
  • We will check the participants' code after the final test stage to ensure fairness.
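
To help verify the 24GB constraint mentioned above, here is a minimal sketch for measuring peak GPU memory during inference with PyTorch. `model` and `noisy_raw` are placeholders for your own network and input tensor; this is not part of the official evaluation code.

```python
# Minimal sketch (not part of the official kit) for checking peak GPU memory
# during inference against the 24 GB RTX 3090 limit.
import torch

@torch.no_grad()
def report_peak_memory(model: torch.nn.Module, noisy_raw: torch.Tensor) -> torch.Tensor:
    device = torch.device("cuda")
    model = model.to(device).eval()
    noisy_raw = noisy_raw.to(device)

    torch.cuda.reset_peak_memory_stats(device)   # start measuring from zero
    denoised = model(noisy_raw)
    torch.cuda.synchronize(device)               # make sure all kernels finished

    peak_gb = torch.cuda.max_memory_allocated(device) / 1024 ** 3
    print(f"Peak GPU memory: {peak_gb:.2f} GB (must stay below 24 GB)")
    return denoised
```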

Possible Solution

A viable solution is to train your model following the pre-train and fine-tune strategy from LED.

During the pre-train phase, you can use other public datasets, and ultimately fine-tune on the data we provide.

We offer a config file in option_example/finetune.yaml that can be used with the LED codebase.
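
As a hedged illustration, the snippet below loads the provided fine-tune config and writes a modified copy, e.g. to shorten the schedule for a quick sanity run. The key names ("train", "total_iter") follow common BasicSR-style configs and are assumptions; check option_example/finetune.yaml and the LED documentation for the actual structure and training command.

```python
# Sketch of inspecting/adjusting the provided fine-tune config before handing
# it to the LED training entry point. Key names below are assumptions.
import yaml

with open("option_example/finetune.yaml", "r") as f:
    opt = yaml.safe_load(f)

# Example tweak: shorten the fine-tune schedule for a quick sanity run.
# These keys are illustrative and may differ in the actual file.
opt.setdefault("train", {})["total_iter"] = 1000

with open("option_example/finetune_debug.yaml", "w") as f:
    yaml.safe_dump(opt, f, sort_keys=False)

print("Wrote option_example/finetune_debug.yaml")
```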

Of course, you are NOT restricted to using this approach.

Related Dataset

We provide the related datasets in this section for two purposes:

  • You can use the clean RAW images they include for data synthesis and for pre-training neural networks (a synthesis sketch follows the table below).
  • Since the provided data includes only training data, we recommend first testing your algorithm on the following two datasets, and then choosing a finalized approach to run validation/testing on Codalab.
| Dataset | 🔗 Source | Conference | Shot on | CFA Pattern |
| --- | --- | --- | --- | --- |
| SID | [Homepage] [Github] [Dataset (Google Drive / Baidu Cloud)] | CVPR 2018 | Sony A7S2 | Bayer (RGGB) |
| ELD | [Github] [Google Drive] [Dataset (Google Drive / Baidu Cloud)] | CVPR 2020 | Sony A7S2 / Nikon D850 / Canon EOS70D / Canon EOS700D | Bayer |
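
For the data-synthesis use case mentioned above, the sketch below adds simple Poisson-Gaussian (shot + read) noise to a clean RAW frame to create synthetic low-light pairs for pre-training. The noise parameters are illustrative placeholders rather than calibrated camera values, and the packed RGGB layout in the usage comment is an assumption; see the LED and ELD papers for calibrated noise models.

```python
# Hedged sketch: synthesize a noisy low-light RAW frame from a clean RAW frame
# (e.g. from SID/ELD) using a simple Poisson-Gaussian noise model.
from typing import Optional

import numpy as np

def synthesize_noisy_raw(clean: np.ndarray,
                         ratio: float = 100.0,
                         k: float = 0.01,
                         read_sigma: float = 0.002,
                         rng: Optional[np.random.Generator] = None) -> np.ndarray:
    """clean: normalized RAW in [0, 1]; ratio: exposure/amplification ratio.
    k and read_sigma are placeholder gain and read-noise values, not calibrated."""
    if rng is None:
        rng = np.random.default_rng()
    dark = clean / ratio                              # simulate a short exposure
    shot = rng.poisson(dark / k) * k                  # shot (Poisson) noise with gain k
    read = rng.normal(0.0, read_sigma, clean.shape)   # read (Gaussian) noise
    noisy = (shot + read) * ratio                     # amplify back to target brightness
    return np.clip(noisy, 0.0, 1.0).astype(np.float32)

# Example usage with a random stand-in for a packed 4-channel RGGB frame:
# clean = np.random.rand(4, 512, 512).astype(np.float32)
# noisy = synthesize_noisy_raw(clean, ratio=100.0)
```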