
Prevent None values in gradients when some inputs have no impact on the target #987

Open

badcount wants to merge 4 commits into master

Conversation

@badcount commented Dec 5, 2023

This fixes gradients that can contain None values. In scenarios where some of the inputs have no impact on the target value, IG generates None-type grads, something like [[gradient 1], [gradient 2], None, None]; then, when the sum is calculated in the _calculate_sum_int method, the line "grads = tf.concat(batches[j], 0)" throws errors like "None type can't convert to tensor".
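
For context, a minimal sketch of the failure mode (the tensors a and b below are hypothetical, purely for illustration): with a list of watched inputs, tf.GradientTape.gradient returns None for any input that does not affect the target.

import tensorflow as tf

a = tf.constant([[1.0, 2.0, 3.0]])
b = tf.constant([[4.0, 5.0, 6.0]])  # has no impact on the target below
with tf.GradientTape() as tape:
    tape.watch([a, b])
    y = tf.reduce_sum(a ** 2)  # the target depends only on `a`
grads = tape.gradient(y, [a, b])
print(grads)  # [<tf.Tensor ...>, None]: the unused input yields None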

Here is the full stack trace:

An error was encountered:
Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor.
Traceback (most recent call last):
  File "/tmp/5948054080360197755", line 229, in execute
    exec(code, global_dict)
  File "", line 8, in
    baselines=None)
  File "/usr/local/lib/python3.7/dist-packages/alibi/explainers/integrated_gradients.py", line 828, in explain
    attribute_to_layer_inputs)
  File "/usr/local/lib/python3.7/dist-packages/alibi/explainers/integrated_gradients.py", line 1069, in _compute_attributions_list_input
    step_sizes, j)
  File "/usr/local/lib/python3.7/dist-packages/alibi/explainers/integrated_gradients.py", line 614, in _calculate_sum_int
    grads = tf.concat(batches[j], 0)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/util/dispatch.py", line 206, in wrapper
    return target(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/array_ops.py", line 1769, in concat
    return gen_array_ops.concat_v2(values=values, axis=axis, name=name)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 1218, in concat_v2
    values, axis, name=name, ctx=_ctx)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/ops/gen_array_ops.py", line 1248, in concat_v2_eager_fallback
    _attr_T, values = _execute.args_to_matching_eager(list(values), ctx, [])
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/eager/execute.py", line 274, in args_to_matching_eager
    t, dtype, preferred_dtype=default_dtype, ctx=ctx)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/profiler/trace.py", line 163, in wrapped
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/ops.py", line 1566, in convert_to_tensor
    ret = conversion_func(value, dtype=dtype, name=name, as_ref=as_ref)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py", line 346, in _constant_tensor_conversion_function
    return constant(v, dtype=dtype, name=name)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py", line 272, in constant
    allow_broadcast=True)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py", line 283, in _constant_impl
    return _constant_eager_impl(ctx, value, dtype, shape, verify_shape)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py", line 308, in _constant_eager_impl
    t = convert_to_eager_tensor(value, ctx, dtype)
  File "/usr/local/lib/python3.7/dist-packages/tensorflow/python/framework/constant_op.py", line 106, in convert_to_eager_tensor
    return ops.EagerTensor(value, ctx.device_name, dtype)
ValueError: Attempt to convert a value (None) with an unsupported type (<class 'NoneType'>) to a Tensor.
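
The last frame is the crash this PR prevents: a None entry in the list passed to tf.concat cannot be converted to a tensor. A minimal reproduction (hypothetical tensor t):

import tensorflow as tf

t = tf.ones((2, 3))
tf.concat([t, None], 0)  # ValueError: Attempt to convert a value (None) ...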

@HughChen left a comment

Checking on the rationale for setting None gradients to zero, and whether it is needed for a non-list input x.

    shape = x[0].shape
else:
    shape = x.shape
for idx, grad in enumerate(grads):
If our input x is not a list, I think tape.gradient may directly output the gradient for x, in which case we may not want this enumerate step, which seems to assume that grads is a list of gradient tensors (one per input).
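
A small sketch of that behaviour (hypothetical tensor a): for a single non-list input, tape.gradient returns one tensor rather than a list, so enumerating it would iterate over its first axis instead of over per-input gradients.

import tensorflow as tf

a = tf.constant([[1.0, 2.0, 3.0]])
with tf.GradientTape() as tape:
    tape.watch(a)
    y = tf.reduce_sum(a ** 2)
grad = tape.gradient(y, a)  # a single tf.Tensor, not a list of tensors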

And actually, if our input x isn't a list, would we encounter the None gradients?

It seems like this primarily comes up for us because we have outputs y1, y2, y3 which depend on different subsets of inputs x1, x2, x3. If y1 only depends on x1, then when we try to explain the model we can run into issues, because the gradients for x2 and x3 will be None.

But if the input isn't a list and is just x, then it seems like every output would need to depend on the whole input tensor?

If that is the case, maybe we should do this gradient zero-ing only when x is a list? Something like:

if isinstance(x, list):
    for idx, grad in enumerate(grads):
        if grad is None:
            grads[idx] = tf.convert_to_tensor(np.zeros(shape), dtype=x[idx].dtype)

shape = x.shape
for idx, grad in enumerate(grads):
    if grad is None:
        grads[idx] = tf.convert_to_tensor(np.zeros(shape), dtype=x[idx].dtype)
I think in an earlier commit you had x[idx].shape, which seems to make more sense in case each input has a different shape.
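
A sketch of what that per-input variant could look like (assuming x is the list of input tensors and numpy is imported as np):

if isinstance(x, list):
    for idx, grad in enumerate(grads):
        if grad is None:
            # Use each input's own shape so that inputs with different
            # shapes get correctly shaped zero gradients.
            grads[idx] = tf.convert_to_tensor(np.zeros(x[idx].shape), dtype=x[idx].dtype)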

@@ -400,6 +400,14 @@ def _gradients_input(model: Union[tf.keras.models.Model],

grads = tape.gradient(preds, x)

# if there are inputs have not impact to the output, the gradient is None, but we need to return a tensor
Slight nit: Maybe "If certain inputs don't impact the target, the gradient is None, but we need to return a tensor"

@badcount (Author) commented Dec 6, 2023

Thanks so much for the input @HughChen, updated the PR.

@CLAassistant

CLA assistant check
Thank you for your submission! We really appreciate it. Like many open source projects, we ask that you sign our Contributor License Agreement before we can accept your contribution.
You have signed the CLA already but the status is still pending? Let us recheck it.
