Parametrized contrast #217

Open · wants to merge 4 commits into main
Conversation

@xi (Contributor) commented on Sep 10, 2022

This is based on #216 to avoid merge conflicts.

This branch contains two related changes: modifying Weber contrast and adding Stevens contrast. I would be happy to split this into two separate pull requests, but I wanted to first ask whether the general direction is something you are interested in.

Contrast depends a lot on viewing conditions. Therefore I think it makes sense to have parametrized contrast algorithms.

I propose to have two parametrized contrast algorithms: Weber (offset) and Stevens (exponent and offset).
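To make the proposal concrete, here is a minimal standalone sketch of the two parametrizations. This is not this library's API; the function names and the exact placement of the offset are my own illustration of one possible choice.

```python
def weber_contrast(y_light, y_dark, offset=0.05):
    """Weber-style contrast with an additive luminance offset ("flare")."""
    return (y_light - y_dark) / (y_dark + offset)


def stevens_contrast(y_light, y_dark, exponent=1 / 3, offset=0.0025):
    """Difference of Stevens power-law lightness estimates."""
    return (y_light + offset) ** exponent - (y_dark + offset) ** exponent
```

For example, `weber_contrast(1.0, 0.0)` with the 0.05 offset gives 20, i.e. the maximum WCAG 2.x ratio of 21:1 minus one.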

Here's the issue: It would be great if we could give clear guidance on how to pick these parameters. Unfortunately there seems to be a bit of confusion around their exact physical interpretations and values.

What I call "offset" is called "Lambient" in the referenced Hwang/Peli paper. In other places it is called "flare". This interpretation is already criticised in the docs:

> Widely used in accessibility checkers, the WCAG 2.1 algorithm is defined for sRGB only, and corresponds to Simple Contrast with a fixed 0.05 relative luminance increase to both colors to account for viewing flare.
>
> This value is much higher than that in the sRGB standard, which puts white at 80 cd/m2 and black at 0.2 cd/m2, a relative luminance boost of 0.0025.
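For reference, the WCAG 2.1 ratio mentioned in that quote is Simple Contrast with the fixed 0.05 term added to both relative luminances, and the 0.0025 figure follows from the sRGB viewing conditions quoted above (0.2 / 80 = 0.0025). A quick sketch:

```python
def wcag21_contrast(y1, y2):
    """WCAG 2.1 contrast ratio: (Y_lighter + 0.05) / (Y_darker + 0.05)."""
    y_light, y_dark = max(y1, y2), min(y1, y2)
    return (y_light + 0.05) / (y_dark + 0.05)


# The sRGB viewing-condition offset quoted above:
# 0.2 cd/m^2 (display black) / 80 cd/m^2 (display white) = 0.0025
```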

This offset has a very different effect for Weber than for Stevens though. Using an offset of 0.05 for Weber is quite similar to using an offset of 0.0025 and an exponent of 1/3 (used e.g. in CIELab and OkLab) for Stevens:

[figure: comparison of different parameters]
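For anyone who wants to reproduce a comparison along these lines, here is a rough plotting sketch. It reuses the illustrative helpers sketched earlier; the exact parametrization behind the original figure is not specified here, so this is only illustrative.

```python
import numpy as np
import matplotlib.pyplot as plt

y_dark = np.linspace(0.0, 1.0, 256)  # relative luminance of the darker color
y_light = 1.0                        # compare everything against white

weber = (y_light - y_dark) / (y_dark + 0.05)
stevens = (y_light + 0.0025) ** (1 / 3) - (y_dark + 0.0025) ** (1 / 3)

# Normalize both curves so only their shapes are compared.
plt.plot(y_dark, weber / weber.max(), label="Weber, offset 0.05")
plt.plot(y_dark, stevens / stevens.max(), label="Stevens, exponent 1/3, offset 0.0025")
plt.xlabel("relative luminance of darker color")
plt.ylabel("normalized contrast")
plt.legend()
plt.show()
```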

Due to all this confusion, I think it might be best to stick with the neutral term "offset".

I understand if this is too experimental for you and you decide to reject this. However, I think it could be beneficial for the discussion to have the low level tools to experiment with different parameters, even if we don't know the best values yet.

@Myndex (Contributor) commented on Sep 20, 2022

Generalized comments here:

I'm not sure what @xi is up to; some of these things I covered over three years ago in the initial evaluations of existing contrast metrics, some of which were discussed in WCAG thread #695, and some were discussed elsewhere.

xi is, by his own admission, not familiar with vision or color science. The main exception I take is to his insistence on "simplifying" the math by discarding perceptual uniformity, which he does liberally in his repo.

Hwang/Peli

First, the Hwang/Peli paper is just a hypothesis, with no empirical data or study. The graphs are simply math assumptions derived a priori. Further, the Hwang/Peli paper is essentially just an asymmetric version of WCAG 2. It is also covered/protected by an issued patent.

Early on I experimented with these types of modified Weber with various asymmetric offsets and scales, and found that they did not match measured perception of sRGB displays over the visual range (i.e. they are not particularly uniform).

S. S. Stevens

> To Honor Fechner and Repeal His Law: A power function, not a log function, describes the operating characteristic of a sensory system. (S. S. Stevens, 1961)

> …This offset has a very different effect for Weber than for Stevens though. Using an offset of 0.05 for Weber is quite similar to using an offset of 0.0025 and an exponent of 1/3 (used e.g. in CIELab and OkLab) for Stevens.

There is a significant misunderstanding of the field here. First of all, CIE L* already has an offset, and as a matter of comparison, ∆L* is very similar to WCAG 2 contrast in terms of the shape of the response curve over a set of color pairs that span the visual range.

L* is based on Munsell Value.
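For context, the standard CIE 1976 L* definition already contains both a power function and an offset (the 16/116 term, plus a linear segment near black), which is the built-in offset referred to above. A sketch of the standard formula:

```python
def cie_lightness(y):
    """CIE 1976 lightness L* from relative luminance Y (0..1, white = 1)."""
    epsilon = (6 / 29) ** 3   # ~0.008856
    kappa = (29 / 3) ** 3     # ~903.3
    if y > epsilon:
        return 116 * y ** (1 / 3) - 16
    return kappa * y          # linear segment near black
```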

Stevens was not only about an exponent of 0.333; he was also one of the first to recognize the different response curves due to spatial frequency and polarity.

Again, Tobias seems focused on abstract numerical comparisons, not vision science.

The graph of different curves he presents is not really meaningful, as nowhere does he include any of the available empirical data sets as a reference, nor the other accepted perceptual lightness curves (such as L*), so I am not certain what point he is trying to make.

APCA

APCA traces partly back to Stevens, but also to Fairchild's R-Lab and many others. It adds several practical features needed for modeling self-illuminated displays, including flare and black-point compensation and polarity sensitivity, and it is weighted for high spatial frequency stimuli (text).

> However, I think it could be beneficial for the discussion to have the low level tools to experiment with different parameters, even if we don't know the best values yet.

These models were exercised and evaluated in 2019 here in the lab, and also as part of the Visual Contrast Subgroup of Silver; a lot of this was linked/presented in the WCAG GitHub. The result was first SAPC, which led to the creation of the color matching experiments that provided the supra-threshold data set forming the basis of the APCA Lc curves. APCA's font/threshold guidelines are sourced from the research of Lovie-Kitchin et al.

xi is not examining any empirical data at all, only his abstractions of the math. This is despite the fact that there is a wealth of data sets available for training models, as well as existing models such as L*.
