matteosal opened a new issue, #21070:
URL: https://github.com/apache/incubator-mxnet/issues/21070

   This script repeatedly runs a large convolution operator on the same input and prints the process's resident memory usage after each call:
   
   ```
   import mxnet as mx
   import os, psutil

   def get_memory_usage():
       # Resident set size of this process, in MB
       return psutil.Process(os.getpid()).memory_info().rss / 1e+6

   # A single convolution with a large 20x20 kernel
   sym = mx.sym.Convolution(
       mx.sym.Variable('in'),
       mx.sym.Variable('w'),
       mx.sym.Variable('b'),
       kernel=(20, 20),
       num_filter=1
   )

   inputs = {
       'in': mx.nd.ones([1, 3, 500, 500]),
       'w': mx.nd.ones([1, 3, 20, 20]),
       'b': mx.nd.ones([1])
   }
   cached_op = mx.ndarray.CachedOp(sym)

   print('Initial memory: ' + str(get_memory_usage()))
   for i in range(10):
       cached_op(*inputs.values(), default_ctx=mx.cpu())
       mx.ndarray.waitall()  # make sure the run has finished before measuring
       print(get_memory_usage())
   ```
   
   This is what I'm getting:
   ```
   Initial memory: 188.06784
   1306.4192
   2416.996352
   3527.53664
   4638.076928
   4638.076928
   4638.076928
   4638.076928
   4638.076928
   4638.076928
   4638.076928
   ```
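   
   Looking at the deltas, each of the first four evaluations grows the process by almost exactly 1110.5 MB. That happens to match the size of a single float32 im2col workspace for this convolution (a back-of-the-envelope guess on my part, assuming the CPU path lowers the convolution to im2col + GEMM; I have not verified which implementation is actually selected):
   
   ```
   # Back-of-the-envelope size of one float32 im2col workspace for this
   # convolution (assumed implementation detail, not verified in the source)
   C, KH, KW = 3, 20, 20            # input channels, kernel height/width
   H_OUT = W_OUT = 500 - 20 + 1     # output spatial size: 481 (no padding, stride 1)
   workspace_mb = C * KH * KW * H_OUT * W_OUT * 4 / 1e6
   print(workspace_mb)              # ~1110.5, matching the per-evaluation growth
   ```
   
   If that is what is being allocated, the puzzle is why four such workspaces accumulate before the allocator starts reusing them.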
   
   Memory usage climbs steeply over the first few evaluations and then plateaus. I would naively expect it to stop increasing after the first evaluation. Why does this happen? Is there a way to control this behaviour?
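   
   One experiment that might narrow this down (`MXNET_ENGINE_TYPE` is a documented MXNet environment variable selecting the execution engine; whether the threaded engine plays any role here is only my speculation) is to rerun the same loop under the synchronous naive engine and compare the memory curve:
   
   ```
   import os
   # MXNET_ENGINE_TYPE must be set before mxnet is first imported.
   # NaiveEngine executes operators synchronously, with no background threads.
   os.environ['MXNET_ENGINE_TYPE'] = 'NaiveEngine'

   import mxnet as mx
   # ... then build the same cached_op as above, run the loop again,
   # and compare the printed memory numbers against the ones reported here
   ```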

