ArmageddonKnight commented on issue #14973: [MXNET-1404] Added the GPU memory 
profiler
URL: https://github.com/apache/incubator-mxnet/pull/14973#issuecomment-496024566
 
 
   @szha The problem with this proposed approach is that it might work well for the pure imperative programming paradigm, but **NOT** for symbolic graphs or Gluon.
   
   The reason is that in those approaches memory allocations do not happen immediately; they happen only when the computation graph is materialized. Consider the same piece of code, but written with the symbolic graph approach:
   
   ```Python
   def function1(sym1, sym2):
       with mx.profiler.scope('function1'):
           r1 = mx.sym.op1(sym1)
           r2 = mx.sym.op1(sym2)
           r3 = mx.sym.op2(r1, r2)
           return r3
   ```
   
   Because `mx.sym` only builds up the graph and performs no actual memory allocation or compute, by the time the actual GPU memory allocations happen (i.e., when the computation graph is materialized via `bind()` or `simple_bind()`), the profiler scope information is already lost, and I do not think there is a good way of recovering it.
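To illustrate the timing problem, here is a minimal pure-Python sketch (not MXNet API; `scope` and `Node` are hypothetical stand-ins for `mx.profiler.scope` and `mx.sym`) showing that a context-manager scope has already exited by the time a lazily built graph is materialized:

```python
import contextlib

# Hypothetical stand-ins: scope() tracks the active annotation,
# while Node only records graph structure, allocating nothing.
_active_scope = []

@contextlib.contextmanager
def scope(name):
    _active_scope.append(name)
    try:
        yield
    finally:
        _active_scope.pop()

class Node:
    """A lazy graph node: construction records structure, not memory."""
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def materialize(self):
        # Allocation would happen here, analogous to bind()/simple_bind();
        # by now the `with scope(...)` block has already exited.
        return list(_active_scope)

def function1(sym1, sym2):
    with scope('function1'):
        r1 = Node('op1', sym1)
        r2 = Node('op1', sym2)
        return Node('op2', r1, r2)

graph = function1(Node('var'), Node('var'))
print(graph.materialize())  # [] -- the scope info is gone at bind time
```

The same structure holds for any framework that separates graph construction from execution: annotations captured at construction time must be stored on the graph itself to survive until allocation time.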
   
   Besides, with your proposed approach, all existing MXNet code would need to be modified to inject `mx.profiler.scope('...')` calls in order to use the memory profiler. Given that frontend programmers always need to provide extra annotations anyway, why not put them all in one file rather than scattering them around the source code?
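A minimal sketch of that "one file" alternative, assuming allocations can be attributed by symbol name after materialization (all names here are illustrative, not MXNet API): a single mapping from name prefixes to profiler scopes, consulted post hoc.

```python
# Hypothetical central annotation table: symbol-name prefix -> profiler scope.
# Kept in one file instead of scope() calls scattered through the code base.
SCOPE_MAP = {
    'fc': 'dense_layers',
    'conv': 'conv_layers',
    'softmax': 'output',
}

def scope_for(symbol_name, default='unknown'):
    """Look up the profiler scope for an allocation by its symbol name."""
    for prefix, scope_name in SCOPE_MAP.items():
        if symbol_name.startswith(prefix):
            return scope_name
    return default

print(scope_for('conv0_weight'))  # conv_layers
print(scope_for('fc1_bias'))      # dense_layers
```

Because symbol names survive graph materialization, a lookup like this can attribute memory at `bind()` time without any source-level injection.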
