
Please clarify fine-tuning instructions. #22

ioctl-user opened this issue Jan 22, 2024 · 2 comments

@ioctl-user

I am trying to use LED for my mobile phone camera.

Could you please clarify some questions about the documentation: https://github.com/Srameo/LED/blob/main/docs/demo.md

  1. How should one choose the best "Pretrain" checkpoint for further training, depending on the use case?

  2. When preparing pairs, the noisy image should come first in the pair, right? If so, could you highlight this in the document?

  3. Could you note in the document that the cutomized_denoiser.py script creates the experiments/[TAG] directory itself?

  4. What are the recommendations for an optimal set of fine-tuning pairs? That is, are there any rules besides a constant exposure-value ratio? Perhaps the photos should contain clean, vivid colors, one shot in each pair should be underexposed (to capture "clean" noise) and the other taken at normal exposure, and so on?

p.s. For mobile phones there is an app, MotionCam, that allows capturing RAW photos with manual ISO and exposure. The free version is enough for training purposes. The only problem with a phone is attaching a wireless remote trigger to keep the phone from moving while the photos are being taken.

p.p.s. To check the RAW exposure settings, the following command can be used: `exiftool *dng | grep -E -i "iso|shutter"`
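On question 4 and the p.p.s.: a constant exposure-value ratio can also be checked numerically, since relative exposure is proportional to ISO × shutter time. A minimal sketch (all ISO and shutter values below are made-up examples, not recommendations from the LED authors):

```python
# Hypothetical sketch: verify that two capture pairs share the same
# exposure-value ratio before using them as fine-tuning data.
# Relative exposure of a RAW frame is proportional to ISO * shutter time.

def exposure(iso: float, shutter_s: float) -> float:
    """Relative exposure of a single RAW frame."""
    return iso * shutter_s

def pair_ratio(short_iso, short_t, long_iso, long_t) -> float:
    """Exposure ratio between the clean (long) and noisy (short) frame."""
    return exposure(long_iso, long_t) / exposure(short_iso, short_t)

# Two example pairs: both have a 100x ratio, so they are consistent.
r1 = pair_ratio(short_iso=3200, short_t=1/100, long_iso=3200, long_t=1)
r2 = pair_ratio(short_iso=800,  short_t=1/25,  long_iso=800,  long_t=4)
assert abs(r1 - r2) < 1e-9
print(r1)  # → 100.0
```

The ISO and shutter values can be pulled from the `exiftool` output above and fed into this check.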

@Srameo

Srameo commented Jan 22, 2024

  1. Normally, you can use a validation set to select the results. However, since collecting a validation set can be resource-intensive, you may simply run the model on a few inputs and judge the outputs qualitatively to determine when the algorithm reaches its optimal solution. Note that different sensors may require different numbers of iterations for LED.

  2. Yes, the training data should come as noisy/noiseless pairs. We will highlight this.

  3. Yes, we will note this in the doc.

  4. The recommendations for capturing the data can be found in Sec. 4.2 of our extended paper.

And thanks for the tips! We will add them to our doc.
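The qualitative checkpoint selection described in point 1 can be sketched with a simple no-reference proxy. This is an illustrative stand-in, not part of the LED code base, and any ranking it produces should still be confirmed by visual inspection:

```python
# Hypothetical sketch: rank checkpoint outputs by a crude no-reference
# residual-noise proxy (high-frequency energy), then inspect the best
# candidates by eye. Nothing here comes from the LED repository.
import numpy as np

def high_freq_energy(img: np.ndarray) -> float:
    """Mean absolute difference between each pixel and its 4-neighbour
    average: a rough residual-noise proxy (lower = smoother output)."""
    pad = np.pad(img, 1, mode="edge")
    neigh = (pad[:-2, 1:-1] + pad[2:, 1:-1]
             + pad[1:-1, :-2] + pad[1:-1, 2:]) / 4.0
    return float(np.mean(np.abs(img - neigh)))

# Stand-ins for the outputs of two fine-tuned checkpoints on one input:
# a smooth ramp vs. the same ramp with residual noise left in.
ramp = np.add.outer(np.linspace(0, 1, 64), np.linspace(0, 1, 64)) / 2
noisy_out = ramp + np.random.default_rng(0).normal(0, 0.05, ramp.shape)

# The proxy ranks the smoother output first.
assert high_freq_energy(ramp) < high_freq_energy(noisy_out)
```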

@ioctl-user

ioctl-user commented Jan 23, 2024

Thank you for the comments! Some additional questions:

1. There are too many unknown variables for a common user starting training: pretraining dataset selection, fine-tuning pair preparation, etc. May one assume that finding the best "Deploy" model for one's RAW photos means one should use the corresponding "Pretrain" model?

4. So vivid colors are desirable for calibration. In addition, could you please clarify Fig. 11 and the instruction text: "To customize your denoiser, we need paired data of different ISOs at the same ratio."?
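One reading of that quoted instruction, sketched below with entirely made-up shutter times: vary the ISO across pairs while keeping the long/short exposure ratio fixed.

```python
# Hypothetical illustration of "paired data of different ISOs at the
# same ratio": each pair uses a different ISO, but the clean (long)
# and noisy (short) exposures always differ by the same fixed factor.
RATIO = 100
pairs = []
for iso in (800, 1600, 3200):
    short_t = 1 / 100                  # noisy short exposure (seconds)
    long_t = short_t * RATIO           # clean long exposure, same ISO
    pairs.append({"iso": iso, "short_s": short_t, "long_s": long_t})

# Every pair shares the same exposure ratio, regardless of ISO.
for p in pairs:
    assert abs(p["long_s"] / p["short_s"] - RATIO) < 1e-9
```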

@Srameo self-assigned this Mar 11, 2024