Remove np.bool8 support (deprecated) #6633

Closed

Conversation

bmorris3 (Contributor)

Description

numpy/numpy#22607 deprecated the np.bool8 dtype alias. This PR removes the parts of skimage that reference that dtype, so that downstream packages don't run into deprecation warnings or errors.

This was discovered in dev-dependency test logs while working on spacetelescope/jdaviz#1834.
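
For context, a minimal sketch (not code from this PR) of what the deprecation means in practice; np.bool8 was just an alias for np.bool_, so the fix on the skimage side is to drop or replace the alias:

    import numpy as np

    # np.bool8 is deprecated in NumPy 1.24 (numpy/numpy#22607) and removed in NumPy 2.0.
    # The alias pointed at np.bool_, so both spellings produce the same dtype.
    mask_old = np.zeros(3, dtype=np.bool8)   # emits DeprecationWarning on NumPy 1.24.x
    mask_new = np.zeros(3, dtype=np.bool_)   # warning-free equivalent
    assert mask_old.dtype == mask_new.dtype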

For reviewers

  • Check that the PR title is short, concise, and will make sense 1 year
    later.
  • Check that new functions are imported in corresponding __init__.py.
  • Check that new features, API changes, and deprecations are mentioned in
    doc/release/release_dev.rst.
  • There is a bot to help automate backporting a PR to an older branch. For
    example, to backport to v0.19.x after merging, add the following in a PR
    comment: @meeseeksdev backport to v0.19.x
  • To run benchmarks on a PR, add the run-benchmark label. To rerun, the label
    can be removed and then added again. The benchmark output can be checked in
    the "Actions" tab.

bmorris3 (Contributor, Author) commented Nov 28, 2022

One remaining warning still needs to be resolved; it comes from upstream scipy and has already been fixed in scipy/scipy#17467.

Comment on lines 116 to 120
 def test_very_large_labels():
-    imax = np.iinfo(np.int64).max
+    imax = np.iinfo(np.int32).max
     labels = np.array([0, 1, imax, 42, 42], dtype=np.int64)
     output, fw, inv = relabel_sequential(labels, offset=imax)
     assert np.max(output) == imax + 2
bmorris3 (Contributor, Author)

Maybe I'm misunderstanding what was intended by this test, but if we specify imax = np.iinfo(np.int64).max and then relabel with offset=imax, numpy can't accurately represent (or distinguish) the values imax + x for x > 0. I've changed the test to use a "smaller" large number so that np.int64 doesn't overflow immediately within relabel_sequential.
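
To make the overflow concrete, a small illustration (the values are my own, not taken from relabel_sequential):

    import numpy as np

    labels = np.array([1, 2], dtype=np.int64)

    # With offset = np.iinfo(np.int64).max, imax + 1 and imax + 2 are not
    # representable as int64, so the addition wraps to large negative values.
    imax64 = np.iinfo(np.int64).max
    print(labels + imax64)    # [-9223372036854775808 -9223372036854775807]

    # Using the int32 maximum instead leaves plenty of headroom inside int64,
    # so the relabelled values stay exact.
    imax32 = np.iinfo(np.int32).max
    print(labels + imax32)    # [2147483648 2147483649]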

Contributor

This one needed to stay np.int64. I updated the logic within relabel_sequential in bb7238c of #6637

Comment on lines +33 to 34
+    rtol = 1e-2 if dtype == np.float32 else 6e-3
     assert_allclose(dE2, data['dE'], rtol=rtol)
bmorris3 (Contributor, Author)

Were these relative tolerances set empirically or some other way? The tests fail for only the first entry in dE2 with numpy v1.24.0.

Contributor

Pretty sure it is empirical (I think I added the 1e-2 for float32 at some point, but I'm not sure about the original choice of 1e-4). I don't know why this particular function seems to have relatively low precision.
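
For reference, a quick sketch of how rtol enters the comparison (my own illustration, not from the test file): assert_allclose checks |actual - desired| <= atol + rtol * |desired| elementwise, so loosening rtol from 6e-3 to 1e-2 tolerates roughly 1% relative error.

    import numpy as np
    from numpy.testing import assert_allclose

    desired = np.array([10.0, 20.0, 30.0])
    actual = desired * (1 + 5e-3)                  # 0.5% relative error everywhere

    assert_allclose(actual, desired, rtol=6e-3)    # passes: 0.5% < 0.6%
    # assert_allclose(actual, desired, rtol=1e-3)  # would raise AssertionError (0.5% > 0.1%)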

stefanv (Member) commented Nov 28, 2022

Thanks @bmorris3!

@grlee77 also proposed #6637 this morning

bmorris3 (Contributor, Author)

Happy for #6637 to supersede this PR if that works!

stefanv (Member) commented Nov 28, 2022

I suspect we need both; will wait for @grlee77 to chime in.

@@ -909,7 +909,7 @@ cdef class Cascade:
 feature_number = int(internal_nodes[0])
 # list() is for Python3 fix here
 lut_array = list(map(lambda x: int(x), internal_nodes[1:]))
-lut = np.asarray(lut_array, dtype='uint32')
+lut = np.array(lut_array).astype(np.uint32)
Member

Why is this change important?

Contributor

See #6638

I think this suggestion is better than just ignoring the warning, as I currently do in #6637. I may just update that PR with a co-authored-by commit crediting @bmorris3, since this approach will keep working even after the deprecation cycle completes.
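
For illustration, a sketch of the warning that motivates the change, assuming the lut values can fall outside the uint32 range (e.g. a -1 marker; this input is hypothetical, not taken from the skimage source):

    import numpy as np

    lut_array = [3, -1, 7]   # hypothetical values; -1 is out of bounds for uint32

    # NumPy 1.24 deprecates converting an out-of-bound Python int directly to an
    # integer dtype, so this line emits a DeprecationWarning (an error in later releases).
    lut = np.asarray(lut_array, dtype='uint32')

    # The replacement first builds a default signed integer array and then casts,
    # which wraps -1 to 4294967295 without going through the deprecated conversion path.
    lut = np.array(lut_array).astype(np.uint32)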

grlee77 (Contributor) commented Nov 29, 2022

Hi @bmorris3, my apologies for not noticing this before opening #6637; otherwise I would have just commented here instead of duplicating some of the work! I think the best solution is to pull your Cascade fix from here into #6637, with co-author credit, and then close this PR.

grlee77 (Contributor) commented Nov 29, 2022

Closing, as this has been folded into #6637.

grlee77 closed this Nov 29, 2022