KellenSunderland commented on a change in pull request #13310: [MXNET-703] Update to TensorRT 5, ONNX IR 3. Fix inference bugs.
URL: https://github.com/apache/incubator-mxnet/pull/13310#discussion_r236058512
 
 

 ##########
 File path: src/operator/contrib/nnvm_to_onnx.cc
 ##########
 @@ -248,8 +259,12 @@ void ConvertPooling(NodeProto* node_proto, const NodeAttrs& attrs,
   AttributeProto* const pads = node_proto->add_attribute();
   pads->set_name("pads");
   pads->set_type(AttributeProto::INTS);
-  for (int kval : pad) {
-    pads->add_ints(static_cast<int64>(kval));
+
+  // Convert from MXNet symmetric pads to ONNX non-symmetric pads by running through the padding twice.
+  for (int i = 0; i < 2; i++) {
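The conversion in the hunk above can be illustrated with a small Python sketch (a hypothetical helper, not part of the patch): MXNet stores pooling pads symmetrically, e.g. `(1, 2)`, while ONNX expects `[x1_begin, x2_begin, ..., x1_end, x2_end]`, so the symmetric values are emitted twice.

```python
def mxnet_pads_to_onnx(pad):
    """Convert MXNet symmetric pads, e.g. (1, 2), into the ONNX
    non-symmetric layout [x1_begin, x2_begin, ..., x1_end, x2_end]
    by running through the padding twice, as the C++ loop does."""
    onnx_pads = []
    for _ in range(2):        # first pass: begin pads, second pass: end pads
        for kval in pad:
            onnx_pads.append(int(kval))
    return onnx_pads
```

For example, `mxnet_pads_to_onnx((1, 2))` yields `[1, 2, 1, 2]`, i.e. identical begin and end padding on each spatial axis.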
 
 Review comment:
   Yeah, I want to have a conversation with them offline; I've already brought this up a few times. The trick here is that we need to export to ONNX from native code. This is more or less 100% duplication (minus tests). I'd be tempted to politely suggest we migrate all the ONNX stuff to native code and core functionality so that, for example, I can export to ONNX and Java users running inference can import from ONNX. I think this feature is important enough that it shouldn't be Python-only.

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services
