sxjscience edited a comment on issue #18256:
URL: https://github.com/apache/incubator-mxnet/issues/18256#issuecomment-625050992


   @leezu Previously, we had one benchmark by @Jerryzcn: 
https://github.com/apache/incubator-mxnet/issues/17335 . Based on my recent 
coding experience with the new DeepNumpy interface of MXNet, I do find that 
the usability of Gluon HybridBlock is worse than that of the PyTorch Module. 
When implementing slightly more complicated structures like TransformerXL or a 
Transformer for NMT, it is easy to write a PyTorch-based implementation that 
fully utilizes the GPUs, whereas it is not so trivial to do so in MXNet. One 
observation is that the MXNet program occupies lots of CPU cores; I'm still 
trying to figure out the cause.
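   One way to test whether the high CPU-core usage comes from thread 
oversubscription is to cap the thread pools before importing mxnet. This is a 
minimal diagnostic sketch, assuming the standard `OMP_NUM_THREADS` and 
`MXNET_CPU_WORKER_NTHREADS` knobs are the relevant ones here (that is my 
assumption, not a confirmed cause):

```python
import os

# Assumption: the extra CPU usage comes from OpenMP / engine-worker
# oversubscription. Cap both pools BEFORE importing mxnet so the
# settings take effect at library initialization.
os.environ["OMP_NUM_THREADS"] = "4"            # OpenMP threads used by CPU operators
os.environ["MXNET_CPU_WORKER_NTHREADS"] = "4"  # MXNet engine CPU worker threads

# import mxnet as mx  # import only after the environment is configured

print(os.environ["OMP_NUM_THREADS"])
```

If CPU usage drops without hurting GPU throughput, that would point at thread 
oversubscription rather than operators genuinely running on the CPU.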


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org
