[OUTDATED!, Autograd] Cond Higher-Order Operation #126007
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/126007
Note: Links to docs will display an error until the docs builds have been completed.
❌ 40 New Failures, 5 Unrelated Failures as of commit b5a2b34 with merge base ee804d2.
NEW FAILURES - The following jobs have failed:
FLAKY - The following jobs failed but were likely due to flakiness present on trunk:
This comment was automatically generated by Dr. CI and updates every 15 minutes.
Overall looks good. Should probably fix existing test failures first and add more tests for the following cases:
- More scenarios for autograd beyond a simple function, e.g. 1. map + cond, 2. cond + nn modules (with parameters and buffers) and closures, 3. nested cond autograd, and others (use your imagination to come up with more); a hedged sketch of one such scenario follows this list.
- Tracing the forward and backward graphs with make_fx (with different tracing modes) for the cases listed above; also use expectedInline tests to make sure the produced forward and backward graphs are correct (we have examples for map).
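A hedged sketch of the nested-cond scenario from the first bullet, written in the style of the existing tests (the TestCase class is assumed; the cond import path matches the one used elsewhere in this PR):

# module-level imports assumed in the test file:
import torch
from torch._higher_order_ops import cond

def test_cond_autograd_nested(self):
    def inner_true(x):
        return x.sin()

    def inner_false(x):
        return x.cos()

    def true_fn(x):
        # The true branch itself dispatches on a second predicate.
        return cond(x.sum() > 0, inner_true, inner_false, [x])

    def false_fn(x):
        return x * 2.0

    x = torch.ones(4, requires_grad=True)
    result = cond(torch.tensor(True), true_fn, false_fn, [x])
    result.sum().backward()
    # pred=True and x.sum() > 0 select sin(x), so the gradient is cos(x).
    self.assertEqual(x.grad, torch.cos(x))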
@@ -57,6 +57,8 @@ if(CMAKE_VERSION VERSION_GREATER_EQUAL 3.12.0)
endif()

find_package(CUDAToolkit REQUIRED)
add_library(CUDA::nvToolsExt INTERFACE IMPORTED)
Why do we want to change this?
from torch._higher_order_ops.map import (  # noqa: F401
    _stack_pytree,
    _unstack_pytree,
    map,
)
from torch._higher_order_ops.cond import (  # noqa: F401
Is this change necessary?
result = cond(pred, true_fn, false_fn, [x])
self.assertEqual(result, torch.cos(x))

grad_out = torch.ones_like(result)
Can you also use "assertExpectedInline" to show the forward and backward graphs? Similar to what we did for map.
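For reference, a hedged sketch of what such a test could look like (the test name is hypothetical; the expected string is deliberately left empty so expecttest can fill it in, rather than inventing the graph output here):

from torch.fx.experimental.proxy_tensor import make_fx

def test_cond_autograd_fw_graph(self):
    def true_fn(x):
        return x.sin()

    def false_fn(x):
        return x.cos()

    def f(pred, x):
        return cond(pred, true_fn, false_fn, [x])

    gm = make_fx(f)(torch.tensor(False), torch.randn(4, requires_grad=True))
    # Populate by running with EXPECTTEST_ACCEPT=1; the captured code should
    # contain a torch.ops.higher_order.cond call with true/false subgraphs.
    # The backward graph can be checked the same way from the joint function.
    self.assertExpectedInline(gm.code.strip(), """""")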
def test_cond_make_fx_preserve_stack_trace_for_nodes_in_subgraph(self):
    def true_fn(x):
        return x + x.cos()
# def test_cond_make_fx_preserve_stack_trace_for_nodes_in_subgraph(self):
Does this test start to fail because of the change? Why is that?
@@ -1912,7 +1912,8 @@ def fn(model: Callable):

from torch import export as export

from torch._higher_order_ops import cond
# from torch._higher_order_ops import cond
what happens here?
num_mapped_args = len(operands)

unwrapped_mapped_operands = pytree.tree_map(_from_fun, operands)
# example_operands = _unstack_pytree(unwrapped_mapped_operands)[0]
Yeah, I don't think we need _unstack_pytree, because for cond the operands are already flattened by dynamo (i.e. they become a sequence of tensors) by the time we enter the autograd key of cond.
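To illustrate the difference (a sketch reusing names from the surrounding diff; shapes are arbitrary):

# map: operands are stacked along dim 0, so _unstack_pytree peels off one
# slice to get example inputs for tracing the body.
xs = torch.randn(3, 4)                 # mapped over dim 0
example = _unstack_pytree([xs])[0]     # -> [tensor of shape (4,)]

# cond: by the time the autograd key runs, dynamo has already flattened the
# operands into a plain sequence of tensors, so they can feed make_fx directly.
operands = (torch.randn(4), torch.randn(5))
fw_true_graph = make_fx(true_fn)(*operands)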
fw_true_graph = make_fx(true_fn)(*example_operands)
fw_false_graph = make_fx(false_fn)(*example_operands)

def joint_f(fn, *joint_mapped_args):
Didn't look too carefully into the implementation of these. Can you add more tests to check whether the true graph and false graph create correct backward graphs?
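One possible shape for such a test (hypothetical name): compare the gradients produced through cond against the gradients of each branch run directly, for both predicate values:

def test_cond_autograd_matches_branches(self):
    def true_fn(x):
        return x.sin()

    def false_fn(x):
        return x.cos()

    for pred, branch in [(torch.tensor(True), true_fn),
                         (torch.tensor(False), false_fn)]:
        x = torch.randn(4, requires_grad=True)
        result = cond(pred, true_fn, false_fn, [x])
        (grad_via_cond,) = torch.autograd.grad(result.sum(), x)

        # Reference: run the selected branch directly, outside of cond.
        x_ref = x.detach().clone().requires_grad_(True)
        (grad_ref,) = torch.autograd.grad(branch(x_ref).sum(), x_ref)
        self.assertEqual(grad_via_cond, grad_ref)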
@@ -256,6 +256,23 @@ def false_fn(x):
    pred = torch.tensor(False, device="cuda")
    result = cond(pred, true_fn, false_fn, [x])
    self.assertEqual(result, torch.cos(x))

def test_cond_autograd_simple(self):
Can we also add opinfo tests for cond here (https://github.com/pytorch/pytorch/blob/main/torch/testing/_internal/hop_db.py), following the map autograd tests?
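A hedged sketch of what such an entry might look like, modeled on the existing map entry in hop_db.py; the sample-input shapes and the exact OpInfo kwargs are assumptions:

def sample_inputs_cond(opinfo, device, dtype, requires_grad, **kwargs):
    make_arg = functools.partial(
        make_tensor, device=device, dtype=dtype, requires_grad=requires_grad
    )
    yield SampleInput(make_arg(2, 2, low=0.1, high=2))

def simple_cond(x):
    # Lambdas keep the sketch short; named functions work equally well.
    return control_flow.cond(x.sum() > 2, lambda x: x.sin(), lambda x: x.cos(), [x])

hop_db.append(
    OpInfo(
        name="cond",
        variant_test_name="simple",
        op=simple_cond,
        sample_inputs_func=sample_inputs_cond,
        dtypes=all_types_and(torch.bool, torch.float16),
        supports_out=False,
    )
)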
This PR is intended to provide the Autograd mechanism for the cond operation. @ydwu4
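For context, a minimal end-to-end sketch of the behavior this PR targets (import path taken from the diff above; whether this exact call sequence works depends on the final implementation):

import torch
from torch._higher_order_ops import cond

def true_fn(x):
    return x.sin()

def false_fn(x):
    return x.cos()

x = torch.randn(4, requires_grad=True)
result = cond(torch.tensor(False), true_fn, false_fn, [x])
result.sum().backward()  # gradients flow through the selected branch
print(torch.allclose(x.grad, -torch.sin(x)))  # d/dx cos(x) = -sin(x)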