
Gaborish filter regression #3458

Open · yota-code opened this issue Apr 5, 2024 · 3 comments
Labels: enhancement, regression, unrelated to 1.0

Comments

yota-code commented Apr 5, 2024

Describe the bug

As noted by eddie.sato in this message: https://discord.com/channels/794206087879852103/803645746661425173/1225342521014485103
The generation loss of jxl exhibits an issue which might be caused by a regression in the Gaborish filter (on the encoding side, as stated by _wb_).

To Reproduce

Encode and decode the same image twice in a row
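
A minimal repro sketch in Python, assuming the cjxl and djxl command-line tools from libjxl are on PATH; the distance setting, file names, and generation count are illustrative:

```python
# Repro sketch: repeatedly encode and decode the same image and let the
# distortion accumulate. cjxl/djxl are assumed to be on PATH; the -d
# distance and the number of generations are illustrative choices.
import subprocess

GENERATIONS = 10      # 2 is enough to see the issue; more makes it obvious
src = "input.png"     # any test image

for gen in range(1, GENERATIONS + 1):
    # Lossy encode; -d 1.0 is an illustrative Butteraugli distance.
    subprocess.run(["cjxl", src, f"gen{gen}.jxl", "-d", "1.0"], check=True)
    # Decode back to pixels so the next generation starts from decoded data.
    subprocess.run(["djxl", f"gen{gen}.jxl", f"gen{gen}.png"], check=True)
    src = f"gen{gen}.png"

# Compare input.png with the successive genN.png files (e.g. with
# butteraugli or PSNR): generations after the first should add almost
# no extra distortion if the round trip is close to idempotent.
```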

Expected behavior

The second encoding should introduce only minimal additional distortion

Screenshots

genloss.webm

Environment

Additional context

cf. the discussion following the post on Discord:

_wb_ — Looks like jpegli zeroes AC quite quickly. And inv_gaborish(gaborish) should probably be made closer to the identity function.
_wb_ — [...] the encoder side gaborish has been changed since then, I suspect causing a regression in terms of generation loss.
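
For context on why that identity matters: the decoder smooths the image with a small Gaborish kernel, and the encoder pre-sharpens with an approximate (truncated) inverse, so any mismatch between the two is re-applied on every generation. Below is a toy numpy sketch of that mechanism, using illustrative weights rather than libjxl's actual kernels:

```python
# Toy model (illustrative weights, NOT libjxl's actual kernels): the
# decoder smooths with a 3x3 Gaborish kernel g; the encoder sharpens
# with a truncated 3x3 inverse s. If g*s is not exactly the identity,
# the residual error is re-applied on every generation.
import numpy as np

def response(center, edge, corner, n=64):
    """Frequency response of a symmetric 3x3 kernel on an n x n torus."""
    k = np.zeros((n, n))
    k[0, 0] = center
    k[0, 1] = k[0, -1] = k[1, 0] = k[-1, 0] = edge
    k[1, 1] = k[1, -1] = k[-1, 1] = k[-1, -1] = corner
    return np.real(np.fft.fft2(k))

# Decoder-side smoothing, normalized to DC gain 1 (weights illustrative).
w1, w2 = 0.115, 0.061
norm = 1.0 + 4 * w1 + 4 * w2
G = response(1 / norm, w1 / norm, w2 / norm)

# The exact inverse 1/G has unbounded support; a real encoder uses a small
# approximation. Mimic that by truncating the exact inverse back to 3x3.
s_full = np.real(np.fft.ifft2(1.0 / G))
S = response(s_full[0, 0], s_full[0, 1], s_full[1, 1])

roundtrip = G * S                             # per-frequency gain, ideally 1
print("max |G*S - 1| per generation:", np.abs(roundtrip - 1).max())
print("after 10 generations:       ", np.abs(roundtrip**10 - 1).max())
```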

niutech commented Apr 7, 2024

Realistically, who is going to save the same image in the same format 10+ times? If you are a photo retoucher, you use an intermediate format such as PSD, PSB, KRA, XCF, or TIFF.

yota-code (Author) commented Apr 8, 2024

Indeed, that particular use case is contrived, but:

  • This regression may hide a quality issue: the Gaborish and inverse Gaborish filters should match as closely as possible (and this is achievable, since they used to).
  • Low generation loss has been an argument put forward since the first releases of the format, and it will be benchmarked by supporters and detractors alike.

If the origin of the issue is a regression, everybody would benefit from correcting it.

jonsneyers (Member) commented
Repeated recompression does occur in practice. Memes typically get shared in lossy formats, and can get recompressed dozens of times because:

  • the image goes from one social media platform to the other, getting recompressed every time
  • the image gets copy/pasted, which often has the effect of copying the decoded pixels, not the original compressed format
  • the image gets edited (e.g. the photo remains the same but the caption changes)

When I look at some of the memes I encounter in the wild, they do exhibit the kind of artifacts corresponding to dozens or even hundreds of generations.

Also if we want to use very high quality lossy as a plausible alternative to lossless in capturing/authoring workflows (e.g. using lossy JXL payloads in DNG), I would argue that avoiding generation loss is one of the key requirements.

So I think this issue should be investigated and I think we should re-tune the encoder-side Gaborish to optimize not the first generation but the Nth one.
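
A sketch of what that re-tuning could look like, in the same toy frequency-domain model as the earlier sketch: instead of fitting the sharpening weights to minimize the first-generation error |G·S − 1|, fit them against the N-fold round-trip gain (G·S)^N. The weights, grid size, and objective are illustrative, not libjxl's actual tuning procedure:

```python
# Tuning sketch: pick encoder-side sharpening weights that minimize the
# N-th generation round-trip error rather than the first-generation one.
# Illustrative toy model; not libjxl's actual kernels or tuning code.
import numpy as np
from scipy.optimize import minimize

def response(center, edge, corner, n=64):
    """Frequency response of a symmetric 3x3 kernel on an n x n torus."""
    k = np.zeros((n, n))
    k[0, 0] = center
    k[0, 1] = k[0, -1] = k[1, 0] = k[-1, 0] = edge
    k[1, 1] = k[1, -1] = k[-1, 1] = k[-1, -1] = corner
    return np.real(np.fft.fft2(k))

# Fixed decoder-side smoothing (same illustrative weights as above).
w1, w2 = 0.115, 0.061
norm = 1.0 + 4 * w1 + 4 * w2
G = response(1 / norm, w1 / norm, w2 / norm)

def gen_error(params, n_gen):
    """Squared deviation of the n_gen-fold round-trip gain from identity."""
    S = response(*params)
    return np.sum(((G * S) ** n_gen - 1.0) ** 2)

x0 = [1.8, -0.17, -0.03]   # rough sharpening kernel with DC gain 1
for n_gen in (1, 10):
    fit = minimize(gen_error, x0, args=(n_gen,))
    print(f"weights tuned for generation {n_gen}: {fit.x.round(4)}")
```

The two fits generally disagree: the N-generation objective penalizes frequencies where the gain drifts from 1, because the drift compounds roughly N-fold, which is exactly the trade-off the re-tuning would have to make.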

mo271 added the enhancement, regression, and unrelated to 1.0 labels on Apr 10, 2024