I believe the `MXNET_GPU_MEM_POOL_RESERVE` environment variable is just a hint 
telling MXNet to release GPU memory that was allocated earlier but has since been 
**freed**, which the memory pool otherwise keeps around for possible reuse later. 
If your network is large (for example, the GPU cannot run two such networks 
simultaneously), one way to handle this is to implement `memory sharing` between 
the computations of the shallow and deeper layers, or to use more aggressive 
`inplace` operators; either of these two approaches means some extra work inside 
the DL framework.
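For reference, here is a minimal sketch of the two user-facing knobs: setting the 
environment variable (it has to be in the environment before MXNet initializes its 
GPU memory pool) and writing an operator's result in place so no extra output 
buffer is allocated. The shapes and the value `5` are just placeholders, not 
recommendations.

```python
import os

# Must be set before mxnet is imported, since the GPU memory pool reads it at startup.
os.environ["MXNET_GPU_MEM_POOL_RESERVE"] = "5"  # percent; placeholder value

import mxnet as mx

ctx = mx.gpu(0)
a = mx.nd.ones((2048, 2048), ctx=ctx)
b = mx.nd.ones((2048, 2048), ctx=ctx)

# In-place update: the result is written back into `a` instead of a newly
# allocated output buffer, which keeps peak GPU memory lower.
a += b                           # NDArray += is performed in place
mx.nd.elemwise_add(a, b, out=a)  # explicit `out=` does the same thing
```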

BTW, have you tried running the network in `TF` with the same GPU memory budget, 
without an `OOM` error being raised? As far as I know, MXNet already has a good 
implementation for using less GPU memory.
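If it helps with that comparison, a common way to pin TF to a fixed memory budget 
(assuming the TF 1.x session API) is something like the sketch below; the fraction 
`0.5` is an arbitrary placeholder for whatever budget you want to match.

```python
import tensorflow as tf

# Cap this process to roughly half of the GPU's memory so the budget
# matches the one you used with MXNet.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.5)
config = tf.ConfigProto(gpu_options=gpu_options)

with tf.Session(config=config) as sess:
    # build and run the graph under this fixed memory budget
    pass
```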

And, in my opinion, `GPU Load` measures how much of the GPU's compute capacity 
(for example, the CUDA cores) the current application is using, not that 81 % of 
the memory is used; here, higher means better use of the GPU.
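To see the two numbers separately, you can query compute utilization and memory 
usage independently with `nvidia-smi` (a sketch assuming a single GPU and that 
`nvidia-smi` is on the PATH):

```python
import subprocess

# utilization.gpu  = how busy the CUDA cores are ("GPU Load")
# memory.used/total = how much GPU memory is occupied
out = subprocess.check_output([
    "nvidia-smi",
    "--query-gpu=utilization.gpu,memory.used,memory.total",
    "--format=csv,noheader,nounits",
])
util, mem_used, mem_total = out.decode().splitlines()[0].split(", ")
print("GPU load: %s %%, memory: %s / %s MiB" % (util, mem_used, mem_total))
```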




