[WIP] fix some dimensionality inconsistencies #11147
Conversation
@@ -185,6 +185,8 @@ def fun_non_numpy(self, x):
         return math.exp(x)

+    def jac_non_numpy(self, x):
+        # Input can be float x or numpy.array([x]). Cast it all into numpy.array(x).
Again, this feels like a bug in the caller - it really ought to be certain what shape it calls this with.
This is from x0 in scipy/optimize/_numdiff.py. The doc says:

    x0 : array_like of shape (n,) or float
        Point at which to estimate the derivatives. Float will be converted
        to a 1-d array.

The "Float will be converted to a 1-d array" part is weird.
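As a hedged illustration only (this is not scipy's actual implementation, and `approx_derivative_sketch` is a made-up name), the promotion the docstring describes can be sketched like this:

```python
import math

import numpy as np

def approx_derivative_sketch(fun, x0):
    # Promote a float x0 to a 1-d array, as the docstring describes,
    # so that `fun` always receives an ndarray.
    x0 = np.atleast_1d(np.asarray(x0, dtype=float))
    h = np.sqrt(np.finfo(float).eps) * np.maximum(1.0, np.abs(x0))
    f0 = fun(x0)
    grad = np.empty(x0.size)
    for i in range(x0.size):
        xh = x0.copy()
        xh[i] += h[i]
        # Forward difference for the i-th partial derivative.
        grad[i] = (fun(xh) - f0) / h[i]
    return grad

# Because fun indexes into x, the same callable works whether the
# caller passed 0.0 or np.array([0.0]).
grad = approx_derivative_sketch(lambda x: math.exp(x[0]), 0.0)
```

The point is that after the `np.atleast_1d` line the callee's shape question disappears, which is why the float-to-array conversion lives in the wrapper rather than in every user function.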
Honestly, it feels like what should actually happen is something like:

    x0_flat = np.asarray(x0).ravel()
    x0_revised = do_linalgy_stuff_that_needs_x_1d(x0_flat)
    f(x0_revised.reshape(x0.shape))
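A self-contained sketch of that round-trip, with `call_with_original_shape` and `solver_step` as hypothetical names: the internal work happens on a flat 1-d copy, and the user's callable gets the original shape back.

```python
import numpy as np

def call_with_original_shape(f, x0, solver_step):
    x0 = np.asarray(x0, dtype=float)
    x_flat = x0.ravel()                    # 1-d representation for internal linear algebra
    x_revised = solver_step(x_flat)        # stand-in for the real solver work
    return f(x_revised.reshape(x0.shape))  # the user's f sees x0's shape again

# Whatever shape (or plain float) the caller used, f sees it restored:
shape = call_with_original_shape(lambda x: np.shape(x), np.ones((2, 3)), lambda v: v * 2)
```

Note that a plain float `x0` becomes a 0-d array under this scheme, so the caller's type is preserved rather than silently promoted.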
I'll perhaps try a patch for that in the next few weeks
The purpose behind that docstring is to let the user know that f is going to be passed an array regardless of whether x is an array or a float.
Hmm, this PR is touching a lot of files, and it's currently not clear to me that all the changes are required. Is backward compatibility preserved for all the changes (assuming there are no bugs), and have tests been added to that effect? Because it touches so many files, I don't think it's going to be possible to give a deep enough review of all the changes.

Can you take a more graded approach, with each specific set of code files being addressed one by one, and with the necessary tests added? In this situation it's better to write the tests first, so a real issue is demonstrated, and then apply a fix.

Let's take the differential_evolution code as an example. For the changes you made I'd like to see a test where the objective function returns either a float or an array with shape (1,). Only if the test shows an issue would it be time to do something. Here, there don't seem to be any issues, and it's not worth the code churn.
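As a sketch of the kind of test being asked for, using a trivial grid search as a stand-in for differential_evolution (so this is illustrative only, not scipy's code): the same objective returning a float vs. a shape-(1,) array should locate the same minimum.

```python
import numpy as np

def grid_minimize(obj, lo, hi, n=1001):
    # Toy stand-in for a real optimizer: evaluate obj on a grid and
    # coerce each energy with float(), the way size-1 arrays have
    # historically been accepted as duck scalars. (NumPy >= 1.25
    # deprecates float() on size-1 arrays of ndim > 0.)
    xs = np.linspace(lo, hi, n)
    energies = [float(obj(np.array([x]))) for x in xs]
    return xs[int(np.argmin(energies))]

def obj_float(x):
    return (x[0] - 2.0) ** 2              # returns a plain scalar

def obj_array(x):
    return np.array([(x[0] - 2.0) ** 2])  # returns a shape-(1,) array

# If duck-scalar behaviour holds, both forms find the same minimum.
```

A real test along these lines would call differential_evolution twice with a fixed seed and assert the results agree; only a disagreement would justify the code change.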
I would argue it is a bug that the second output does not say
@@ -56,7 +56,8 @@ class StructTest2(StructTestFunction):
     """

     def f(self, x):
-        return (x - 30) * numpy.sin(x)
+        # f must return a scalar
+        return (x[0] - 30) * numpy.sin(x[0])
There's no point in making these kinds of changes unless there is a test showing that the minimisation goes wrong before the change and works after it.
One cannot optimize functions that return vectors. The change makes the expression mathematically sound and relieves numpy of the burden of having to convert your vector into a scalar in the background (which is what's happening now).
The tests perhaps should not be changed (apart from adding new ones), if we aim to preserve the API. They are probably representative of the user code that exists out there, and we'd like to keep that working.
…On December 1, 2019 2:40:12 PM UTC, Andrew Nelson ***@***.***> wrote:
andyfaff commented on this pull request.
> @@ -177,17 +178,21 @@ def test_bfgs(self):
>      def test_bfgs_infinite(self):
>          # Test corner case where -Inf is the minimum. See gh-2019.
> -        func = lambda x: -np.e**-x
Not worth making these changes unless it can be shown that before/after the change it doesn't/does work. The churn isn't worth it for a change that doesn't look better anyway.
Actually, I've changed my mind here. The result of the second case is OK, but it should be accompanied by a warning that the result is not a scalar.
👍
It makes no sense to provide a vector-returning function to an optimization method. Optimization can only happen on scalars. If your function returns a vector, then probably something is wrong. It's a good thing to have a warning about that.
So far in numpy, size-1 arrays have been duck scalars by definition in many contexts. Whether it makes sense or not does not matter here: this has been how it has worked for 15 years, and we want to be sure the changes in this PR are not accidentally breaking API compatibility.
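For concreteness, a few of the contexts in which a size-1 array behaves like a scalar (an illustrative snippet of current behaviour; newer numpy has started deprecating some of these coercions):

```python
import numpy as np

a = np.array([3.0])     # size-1 array

print(a + 1)            # broadcasts like a scalar: [4.]
print(a.item())         # explicit scalar extraction: 3.0
print(bool(a > 0))      # truth value is defined for size-1 arrays: True
```

Existing user code relies on exactly these coercions, which is why the compatibility question matters more here than mathematical tidiness.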
Okay, let me break up some of the smaller changes.
Some of these fixes are needed for the upcoming change in numpy that will warn when creating an object array from |
When working on numpy/numpy#10615, I noticed that scipy is in part inconsistent about whether it requires floats or arrays which may only contain one element. The work in numpy had me discover some bugs, e.g., object-dtype arrays which really want to be float arrays. I've fixed a bunch of those here.

Comments requested.
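One hedged illustration of the kind of bug being described (the exact snippet from the PR description was lost in extraction, so this example is reconstructed, not the original): mixing scalars and size-1 arrays in a single np.array call can yield an object-dtype array where a float array was intended.

```python
import numpy as np

# Mixed scalar and size-1 array kept as Python objects: downstream code
# that expects float64 will misbehave on this.
bad = np.array([1.0, np.array([2.0])], dtype=object)

# Extracting the scalar first gives the float array that was intended.
good = np.array([1.0, np.asarray([2.0]).item()])

print(bad.dtype, good.dtype)   # object float64
```

Recent numpy versions warn about (and now reject) such implicit ragged constructions without an explicit `dtype=object`, which is what surfaces these latent bugs in scipy.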