NaNs are converted to int with slicing #4592
Comments
Not only when slicing:
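The inline example did not survive on this page; a minimal sketch of one such non-slicing path, using `astype` (the exact integer produced is platform dependent, and variable names here are mine):

```python
import warnings
import numpy as np

# Casting an all-NaN float array with astype never raises; the NaN is
# converted to some integer value (recent NumPy also emits a
# RuntimeWarning, suppressed here).
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    out = np.array([np.nan]).astype(np.int64)
print(out, out.dtype)  # stored value is platform dependent
```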
@gerritholl What would you expect?
@charris What I'd expect, I'm not sure, but I would want a warning or error, configurable with …
A slice of an integer ndarray allows setting numpy.nan, while ndarray.__setitem__() disallows it. See numpy/numpy#4592.
Avoid a segfault caused by a numpy bug: numpy/numpy#4592
Another way to stumble on this issue:
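The original example here was stripped; a plausible sketch of another inconsistency one can stumble on (my reconstruction, not the poster's exact code): constructing an integer array directly from a NaN raises, while building a float array first and casting it does not.

```python
import warnings
import numpy as np

# Constructing an integer array directly from a NaN refuses the value...
try:
    np.array([np.nan], dtype=np.int64)
    constructor_raised = False
except ValueError:
    constructor_raised = True
print("constructor raised:", constructor_raised)

# ...while building a float array first and casting converts it
# (recent NumPy warns; suppressed here).
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    cast = np.array([np.nan]).astype(np.int64)
print("astype result dtype:", cast.dtype)
```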
FYI: on ARMv8 the conversion produces a different value than the INT64_MIN seen on x86.
ARMv8 hardware is used in our CI matrix, so if there is a fix or some kind of action to be taken, we should be able to detect regressions if a test is added.
@figiel's example seems very surprising. Looking a little further:
So there is an odd type dependence. Possibly, in the issue on top, the difference is that for the slice case the constant …
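The examples behind this comment were not captured; a sketch of one version of the type dependence, assuming the distinction is between a Python float scalar and an array-wrapped value (the 0-d array variant is my illustration):

```python
import warnings
import numpy as np

a = np.zeros(3, dtype=np.int64)

# A Python float NaN assigned to a single element raises...
try:
    a[0] = float("nan")
    scalar_raised = False
except ValueError:
    scalar_raised = True
print("Python float raised:", scalar_raised)

# ...while the same value wrapped in a 0-d array takes the casting
# path and, at least historically, was converted instead of raising.
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    try:
        a[0] = np.array(np.nan)
        zerod_outcome = "converted"
    except ValueError:
        zerod_outcome = "ValueError"
print("0-d array:", zerod_outcome)
```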
Related to #6109
It seems the only case where NumPy needs a consistent definition for this otherwise undefined conversion is NaN -> NaT. I've got tests running for a patch to fix this for aarch64, since x86 just happens to do the conversion correctly, to INT64_MIN.
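A sketch of the NaN -> NaT case being described (assuming a release that includes the consistent definition; NaT shares the INT64_MIN bit pattern, which is why the x86 cast "happens to" be right):

```python
import warnings
import numpy as np

# Casting a float NaN to datetime64 should yield NaT, on aarch64 as
# well as x86, once the conversion is defined consistently.
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    out_dt = np.array([np.nan]).astype("datetime64[s]")
print(out_dt, np.isnat(out_dt[0]))
```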
This still happens in 1.18.1. As an additional comment, note that advanced indexing (not just single-element indexing) also produces an error. As I see it, the semantics of assigning NaN to an integer array should be consistently defined: either always convert to the minimum value or always raise an error. The current behaviour can be quite surprising.
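A sketch of the advanced-indexing case (the outcome reported here is for 1.18.1; whether this raises or converts has varied across NumPy versions, so the code records whichever happens):

```python
import warnings
import numpy as np

a = np.zeros(3, dtype=np.int64)
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    try:
        a[[0, 1]] = np.nan  # advanced (fancy) indexing assignment
        fancy_outcome = "converted"
    except ValueError:
        fancy_outcome = "ValueError"
# Reported as an error on 1.18.1; may differ on other versions.
print("advanced indexing:", fancy_outcome)
```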
Update: the original issue was that assigning NaN via a slice silently converted it, while single-element assignment raised.

However (probably because of the type dependence that @mhvk pointed out above), this does not produce an error:
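The stripped example likely involved a NumPy float scalar; a hedged reconstruction based on the type dependence mentioned above (`np.float32` is my guess at the scalar type, since it is not a subclass of Python float and so can take a different code path):

```python
import warnings
import numpy as np

a = np.arange(5)

# A plain Python float NaN raises on element assignment.
try:
    a[2] = np.nan
    plain_raised = False
except ValueError:
    plain_raised = True
print("np.nan:", "ValueError" if plain_raised else "no error")

# An np.float32 NaN can take a different path; on the reporter's
# 1.18.1 it did not error. Record whichever happens here.
with warnings.catch_warnings():
    warnings.simplefilter("ignore")
    try:
        a[2] = np.float32(np.nan)
        f32_outcome = "no error"
    except ValueError:
        f32_outcome = "ValueError"
print("np.float32 NaN:", f32_outcome)
```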
Also xref gh-17495; this is almost a duplicate of gh-6109, although with more of a focus on the differences caused by casting vs. element setting. The main reason is the fact that we mix casting with item setting. The one problem here (currently) is that item setting doesn't know about cast safety, etc., so it cannot try to imitate normal casting; it basically uses some cast safety that casting doesn't know about anyway. Such assignments are always unsafe casts, but item setting chooses to error on particularly nonsensical conversions.
NumPy will now give a warning on the main branch (settable using …). Otherwise, closing this issue, since I think the warning is a good step in the right direction (and I am not sure whether more will be feasible, especially in the foreseeable future).
Related to #1578: NaNs are still converted to -maxint when assigned by slicing.