roggiezhang-nv commented on issue #8494: Autograd bug in mxnet-cu80: 0.12
URL: 
https://github.com/apache/incubator-mxnet/issues/8494#issuecomment-340989454
 
 
   OK. Would it make more sense, from a user's point of view, if it returned all ones?
   
   By the way, this is not the real problem I want to emphasize. I just found that autograd in 0.12 behaves very differently from 0.11. I have a piece of code that works in 0.11 but is broken in 0.12; here is a snippet of what it does:
   
   ```
   a = ...                  # a is an mx.nd.NDArray
   a.attach_grad()
   loss = 0
   with autograd.record():
       for target in targets:
           loss = loss + LOSS_FUNCTION(a, target)

   loss.backward()
   # update parameters using a.grad
   ```
   In 0.11, this works without any issues, but after switching to 0.12 it is broken: a.grad comes back as all zeros.
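   
   For concreteness, here is a minimal, self-contained sketch of that pattern. The squared-error loss, the array values, and the `targets` list are placeholders I picked for illustration; they stand in for the unspecified LOSS_FUNCTION and data and are not from the original code:
   
   ```
   import mxnet as mx
   from mxnet import autograd

   # Hypothetical stand-ins for the real array and targets.
   a = mx.nd.array([1.0, 2.0, 3.0])
   targets = [mx.nd.zeros(3), mx.nd.ones(3)]

   a.attach_grad()
   loss = 0
   with autograd.record():
       for target in targets:
           # Placeholder loss; the original LOSS_FUNCTION is unspecified.
           loss = loss + ((a - target) ** 2).sum()

   loss.backward()
   print(a.grad.asnumpy())  # expected non-zero gradients; reportedly all zeros on 0.12
   ```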
   
   So I just want to understand whether there were any important changes to autograd behavior in 0.12.
