I observed a memory leak in the tnet demo. The memory leak only occurs when using the torchfire loss. This was observed on both the original version and the parallelized version of the code.
Though I haven't been able to track down the issue yet, I wonder if it is possible that some of the Firedrake variables are not being cleaned up when calling solve_firedrake (line 76 of tnet_heat_equation.py, branch jon/parallelize_demos). I tried commenting out all the physics code in fd_to_torch, both the forward and backward functions, and the issue was greatly reduced. There was still a very small amount of memory growth without the physics calls, but much less than with them.
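For reference, this is roughly how the per-iteration growth can be watched. psutil and the step_fn callable are my own additions for illustration, not part of the demo:

```python
# Minimal sketch (not from tnet_heat_equation.py) for tracking per-iteration memory growth.
# psutil and step_fn are assumptions introduced only for this illustration.
import gc
import psutil


def report_rss(step_fn, num_steps=100):
    """Run step_fn repeatedly and print the resident set size after each call."""
    process = psutil.Process()
    for step in range(num_steps):
        step_fn()                        # e.g. one torchfire loss evaluation plus backward()
        gc.collect()                     # rule out garbage that simply has not been collected yet
        rss_mb = process.memory_info().rss / 1e6
        print(f"step {step}: RSS = {rss_mb:.1f} MB")
```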
Plausibly an issue with firedrake-adjoint? See firedrakeproject/firedrake#2866.
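If it is the global pyadjoint tape accumulating blocks across solve_firedrake calls, one quick check (untested against torchfire; the wrapper below is only a sketch, and training_step is a placeholder) would be to clear the working tape once a step's gradient has been pulled back and see whether the growth stops:

```python
# Sketch only: assumes the leak is the global pyadjoint tape growing with every
# annotated solve, and that the gradient has already been computed by the time
# training_step returns. training_step is a placeholder name.
from pyadjoint import get_working_tape


def training_step_with_reset(training_step, *args):
    """Run one forward + backward step, then drop the recorded tape blocks."""
    out = training_step(*args)          # forward solve and backward() through the torchfire loss
    get_working_tape().clear_tape()     # release the annotated blocks and the Firedrake state they hold
    return out
```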
One suggestion is to implement a custom pyadjoint Block, so the adjoint solver is assembled once and re-used across calls instead of being rebuilt every time the loss is evaluated; a rough sketch follows.
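Something along these lines, perhaps. SolveHeatBlock and the solver's solve/solve_adjoint methods are placeholder names I've made up; only the pyadjoint classes and functions are real:

```python
# Sketch of a custom pyadjoint Block that reuses one pre-assembled solver instead of
# letting firedrake-adjoint re-tape the full forward solve on every call.
# SolveHeatBlock, solve() and solve_adjoint() are placeholders, not existing APIs.
from pyadjoint import Block, get_working_tape, stop_annotating
from pyadjoint.overloaded_type import create_overloaded_object


class SolveHeatBlock(Block):
    def __init__(self, kappa, solver):
        super().__init__()
        self.add_dependency(kappa)
        self.solver = solver  # assembled once, shared by every recompute/adjoint call

    def __str__(self):
        return "SolveHeatBlock"

    def recompute_component(self, inputs, block_variable, idx, prepared):
        # Forward solve with the current value of the dependency (the conductivity field).
        return self.solver.solve(inputs[0])

    def evaluate_adj_component(self, inputs, adj_inputs, block_variable, idx, prepared=None):
        # Adjoint solve against the incoming adjoint value, reusing the cached solver.
        return self.solver.solve_adjoint(inputs[0], adj_inputs[0])


def solve_heat(kappa, solver):
    """Annotate one block for the whole PDE solve, then run it without taping its internals."""
    tape = get_working_tape()
    block = SolveHeatBlock(kappa, solver)
    tape.add_block(block)
    with stop_annotating():             # keep Firedrake's internal operations off the tape
        u = solver.solve(kappa)
    u = create_overloaded_object(u)
    block.add_output(u.create_block_variable())
    return u
```

With this arrangement the tape holds a single block per solve rather than every intermediate Firedrake operation, which should also make any remaining growth much easier to localize.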