lerp()'s implementation has a loss of precision when inputs span multiple orders of magnitude #275
Comments
Sorry, I've been meaning to get to this. I've been mulling over whether changing the default implementation would cause anyone issues. Allegedly the main advantage of the method glam is using is that it is monotonic, whereas the precise version apparently is not; see https://en.wikipedia.org/wiki/Linear_interpolation#Programming_language_support. I think if I am to merge your PR I will bump the glam version number so people don't get a surprise change in behaviour. The other option, I guess, would be to add a second "precise" lerp method, though I think I'd rather not do that.
Another thing to consider with changing the current implementation is currently … Perhaps it would be better to offer a separate …
I just noticed that …
Can you elaborate on what you mean by this? The "precise" implementation will also extrapolate outside the range of …
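On the extrapolation point above: both forms extrapolate when `t` falls outside `[0, 1]`. A minimal sketch (using hypothetical free functions, not glam's actual API):

```rust
// Hypothetical helpers illustrating both lerp forms; not glam's API.
fn lerp_fast(a: f32, b: f32, t: f32) -> f32 {
    // glam-style form.
    a + (b - a) * t
}

fn lerp_precise(a: f32, b: f32, t: f32) -> f32 {
    // "Precise" form from the Wikipedia article linked above.
    a * (1.0 - t) + b * t
}

fn main() {
    // t = 2.0 extrapolates past b with either form.
    println!("{}", lerp_fast(0.0, 10.0, 2.0));    // prints 20
    println!("{}", lerp_precise(0.0, 10.0, 2.0)); // prints 20
}
```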
The general implementation of `lerp` in glam for floating point types takes the form `a + ((b - a) * t)`. However, this presents floating point precision issues if `a` and `b` have very different exponent values, due to the `(b - a)` in the computation. You can test this via `lerp(-16.0e30, 16.0, 1.0)` returning 0.0 instead of the correct 16.0. It may be more accurate to use the form `(a * (1.0 - t)) + (b * t)`. This may not be a significant regression in performance, as floating point multiplication is generally no slower than addition on modern hardware.
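The two forms can be sketched as free functions (hypothetical names, not glam's actual API) to reproduce the reported precision loss:

```rust
// glam-style form: monotonic, but loses precision when a and b
// differ greatly in magnitude, because (b - a) rounds away b.
fn lerp_fast(a: f32, b: f32, t: f32) -> f32 {
    a + (b - a) * t
}

// "Precise" form: exact at t = 0.0 and t = 1.0, but reportedly
// not guaranteed to be monotonic in t.
fn lerp_precise(a: f32, b: f32, t: f32) -> f32 {
    a * (1.0 - t) + b * t
}

fn main() {
    // (16.0 - -16.0e30) rounds to 16.0e30 in f32 (16.0 is far below
    // one ulp of 16.0e30), so -16.0e30 + 16.0e30 == 0.0 at t = 1.0.
    println!("{}", lerp_fast(-16.0e30, 16.0, 1.0));    // prints 0
    println!("{}", lerp_precise(-16.0e30, 16.0, 1.0)); // prints 16
}
```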