[GitHub] roggiezhang commented on issue #8494: Autograd bug in mxnet-cu80: 0.12

2017-11-03 Thread GitBox
roggiezhang commented on issue #8494: Autograd bug in mxnet-cu80: 0.12
URL: 
https://github.com/apache/incubator-mxnet/issues/8494#issuecomment-341635996
 
 
   There is no need to adjust the weights of vgg, since I only use it for 
feature extraction. The code logic is fine: it already works on 0.11.3 but not 
on 0.12, which is why I started this thread. I cannot find anything useful in 
the source code of the Python API. I debugged and found the difference after 
[executing the 
graph](https://github.com/apache/incubator-mxnet/blob/396943e22661f03867d103d134416541e7e4f2bb/python/mxnet/gluon/block.py#L394).
 In 0.12, the grad that I attached disappears after this line, which is not the 
case in 0.11.3.
   
   By the way, here is the rough logic of my code:
   
   1. Build the pre-trained vgg network.
   2. Get the input and output symbols of interest.
   3. Use those inputs and outputs to build a SymbolBlock.
   4. Open autograd.record(), attach a grad to the input, and feed the input to 
the SymbolBlock created above. Lastly, compute the loss and run backward.
   
   In theory, I should get a correct grad for the input (that's how it works in 
0.11.3), but it's broken in 0.12. Since it's related to native code, I'd have 
to ask you guys for help.
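   
   For reference, here is a minimal sketch of the workflow in steps 1-4 above. 
The model, the layer name, and the loss are illustrative assumptions, not the 
actual code from this issue:
   
   ```python
   import mxnet as mx
   from mxnet import autograd, gluon
   from mxnet.gluon.model_zoo import vision
   
   # 1. Build a pre-trained VGG (the actual model in the issue may differ).
   vgg = vision.vgg16(pretrained=True)
   
   # 2. Pick the input symbol and an intermediate output symbol of interest.
   data = mx.sym.var('data')
   internals = vgg(data).get_internals()
   # 'vgg0_conv3_fwd_output' is a hypothetical layer name used for illustration.
   feat_sym = internals['vgg0_conv3_fwd_output']
   
   # 3. Build a SymbolBlock from those inputs/outputs, reusing the vgg parameters.
   feat_net = gluon.SymbolBlock(outputs=feat_sym, inputs=data,
                                params=vgg.collect_params())
   
   # 4. Record, attach a grad to the input, run forward, compute a loss, backprop.
   x = mx.nd.ones((1, 3, 224, 224))
   x.attach_grad()
   with autograd.record():
       feat = feat_net(x)
       loss = (feat ** 2).mean()  # placeholder loss, not the one from the issue
   loss.backward()
   
   print(x.grad)  # per this issue: correct gradient on 0.11.3, but the attached
                  # grad is lost on 0.12 after the graph execution step above
   ```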


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] roggiezhang commented on issue #8494: Autograd bug in mxnet-cu80: 0.12

2017-11-03 Thread GitBox
roggiezhang commented on issue #8494: Autograd bug in mxnet-cu80: 0.12
URL: 
https://github.com/apache/incubator-mxnet/issues/8494#issuecomment-341629698
 
 
   Could anybody help here? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services