gilbertfrancois commented on issue #10002:
URL: https://github.com/apache/incubator-mxnet/issues/10002#issuecomment-708962453


   What is the current status of support for second-order derivatives in Gluon? I tried implementing the method from the paper [Improved Training of Wasserstein GANs](https://arxiv.org/pdf/1704.00028.pdf), but the training program raises an error when I add the gradient penalty to the loss function and backpropagate through it. I noticed that with MXNet 1.7 this works for Dense layers without an activation, but Conv2D and many other layers still appear to be unsupported. I saw a similar question in #5982, but that was about 3 years ago.
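   
   For context, here is a minimal sketch of the gradient penalty I am trying to compute, roughly following the paper. The names `netD`, `real`, `fake` and `lambda_gp` are just placeholders for this example, and it only runs end to end when the critic is built from layers with higher-order gradient support (e.g. Dense without activation on 1.7):
   
```python
import mxnet as mx
from mxnet import autograd, nd

def gradient_penalty(netD, real, fake, lambda_gp=10.0, ctx=mx.cpu()):
    # Placeholder critic/batches; this is a sketch of my setup, not a full repro.
    # Random interpolation between real and fake samples; eps is broadcast
    # over all non-batch dimensions.
    eps = nd.random.uniform(shape=(real.shape[0],) + (1,) * (real.ndim - 1), ctx=ctx)
    x_hat = eps * real + (1 - eps) * fake
    x_hat.attach_grad()
    with autograd.record():
        out = netD(x_hat)
        # create_graph=True keeps the backward pass itself differentiable,
        # which is where the second-order derivative is needed.
        grads = autograd.grad(out, [x_hat], head_grads=nd.ones_like(out),
                              create_graph=True, retain_graph=True)[0]
        flat = grads.reshape((grads.shape[0], -1))
        grad_norm = ((flat ** 2).sum(axis=1) + 1e-12).sqrt()
        gp = lambda_gp * ((grad_norm - 1.0) ** 2).mean()
    return gp
```
   
   Calling `backward()` on this penalty (or on the critic loss with the penalty added) is the step that fails once `netD` contains e.g. Conv2D or BatchNorm blocks; with a critic made only of Dense layers without activation it runs fine on 1.7.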
   
   Are there plans to add second-order derivative support for layers such as gluon.nn.Conv2D, gluon.nn.BatchNorm, gluon.nn.Activation and gluon.nn.LeakyReLU?
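   
   For reference, this is roughly how I am checking individual blocks. The helper name `supports_second_order` and the shapes are made up for this example; it differentiates a block once with `create_graph=True` and then backpropagates through that gradient to see whether MXNet raises an error:
   
```python
import mxnet as mx
from mxnet import autograd, gluon, nd
from mxnet.base import MXNetError

def supports_second_order(block, input_shape, ctx=mx.cpu()):
    # Hypothetical probe: take a first-order gradient with create_graph=True,
    # then backpropagate through that gradient (a second-order derivative).
    x = nd.random.normal(shape=input_shape, ctx=ctx)
    x.attach_grad()
    try:
        with autograd.record():
            y = block(x)
            dx = autograd.grad(y, [x], head_grads=nd.ones_like(y),
                               create_graph=True, retain_graph=True)[0]
            loss = (dx ** 2).sum()
        loss.backward()
        return True
    except MXNetError:
        return False

dense = gluon.nn.Dense(4)
dense.initialize(ctx=mx.cpu())
print(supports_second_order(dense, (2, 8)))        # works for me on 1.7

conv = gluon.nn.Conv2D(4, kernel_size=3)
conv.initialize(ctx=mx.cpu())
print(supports_second_order(conv, (2, 3, 8, 8)))   # this is where I see the error
```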

