
GPU memory leak in adahessian optimizer? #358

Open
sjscotti opened this issue Sep 8, 2021 · 3 comments
Comments

@sjscotti
sjscotti commented Sep 8, 2021

Hi
I am using your library and appreciate all the work you have put into this capability. I started using the adahessian optimizer and found that GPU memory usage grew steadily as the optimizer ran, until all GPU memory was consumed and the run crashed. The leak appears to be within the get_trace routine, and I believe it can be fixed by changing

    hvs = torch.autograd.grad(
        grads, params, grad_outputs=v, only_inputs=True, retain_graph=True
    )

to

    hvs = torch.autograd.grad(
        grads, params, grad_outputs=v, only_inputs=True, retain_graph=False
    )

If you get a chance to check this out, please comment to let me know.
Thanks!
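To show the proposed change in context, here is a minimal, self-contained sketch of the Hutchinson-style diagonal-Hessian estimate that AdaHessian uses (this is illustrative, not the library's actual get_trace code; the function name and structure are assumptions). With retain_graph=False, the second-order graph is freed after the Hessian-vector product instead of accumulating across steps:

```python
# Illustrative sketch of AdaHessian's Hutchinson trace estimate,
# NOT the library's actual implementation.
import torch

def get_trace_estimate(params, grads):
    # Rademacher random vectors: each entry is -1 or +1.
    v = [torch.randint_like(p, high=2) * 2.0 - 1.0 for p in params]
    # Hessian-vector product Hv via a second backward pass.
    # retain_graph=False frees the second-order graph immediately,
    # which is the proposed fix for the memory build-up.
    hvs = torch.autograd.grad(
        grads, params, grad_outputs=v, only_inputs=True, retain_graph=False
    )
    # Diagonal estimate: E[v ⊙ Hv] approximates diag(H).
    return [h * vi for h, vi in zip(hvs, v)]
```

Note that the first-order gradients passed in must have been computed with create_graph=True, otherwise the second autograd.grad call has no graph to differentiate through.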

@jettify
Owner

jettify commented Oct 2, 2021

@sjscotti Would you like to submit a PR with the proposed fix? It could count toward Hacktoberfest.

@sjscotti
Author

sjscotti commented Oct 2, 2021

Thanks for the suggestion, but I don't have experience submitting pull requests. I did give it a try, but I got stuck at the first step (comparing branches).
BTW, you might also add a @torch.no_grad() decorator before each routine in adahessian. I saw that done for some other implementations of adahessian (and there may be other optimizers in your library that could also use this decorator).
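For reference, here is a hedged sketch of what the @torch.no_grad() decorator pattern looks like on an optimizer's step method (a toy SGD-like optimizer, not torch_optimizer's actual code): the decorator disables autograd tracking for the in-place parameter updates, so they don't build graph history.

```python
# Toy example of the @torch.no_grad() decorator pattern on an
# optimizer step; NOT torch_optimizer's actual AdaHessian code.
import torch

class ToyOptimizer(torch.optim.Optimizer):
    def __init__(self, params, lr=0.1):
        super().__init__(params, dict(lr=lr))

    @torch.no_grad()  # in-place updates below are not tracked by autograd
    def step(self):
        for group in self.param_groups:
            for p in group["params"]:
                if p.grad is not None:
                    # Plain gradient-descent update: p <- p - lr * grad
                    p.add_(p.grad, alpha=-group["lr"])
```

One caveat for AdaHessian specifically: the Hessian-vector product itself needs gradient tracking, so a get_trace routine called under the decorator would have to re-enable it locally (e.g. with a torch.enable_grad() context).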

@jettify
Owner

jettify commented Oct 8, 2021

Yep, I have a plan to add @torch.no_grad(); hopefully I will find time to do this soon.
