KellenSunderland commented on a change in pull request #13310: [MXNET-703] Update to TensorRT 5, ONNX IR 3. Fix inference bugs.
URL: https://github.com/apache/incubator-mxnet/pull/13310#discussion_r236058541
 
 

 ##########
 File path: src/operator/contrib/nnvm_to_onnx.cc
 ##########
 @@ -382,31 +387,16 @@ void ConvertBatchNorm(NodeProto* node_proto, const NodeAttrs& attrs,
   AttributeProto* const spatial = node_proto->add_attribute();
   spatial->set_name("spatial");
   spatial->set_type(AttributeProto::INT);
-  spatial->set_i(1);
-
-  AttributeProto* const consumed = node_proto->add_attribute();
-  consumed->set_name("consumed_inputs");
-  consumed->set_type(AttributeProto::INTS);
-
-  for (int i = 0; i < 5; i++) {
-    int val = (i < 3) ? 0 : 1;
-    consumed->add_ints(static_cast<int64>(val));
-  }
+  // MXNet computes mean and variance per feature for batchnorm.  Enabling spatial mode
+  // (default in ONNX3) implies running batchnorm on all spatial features so we need to explicitly
+  // disable this for MXNet's BatchNorm.
+  spatial->set_i(0);
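
 For context, a minimal sketch of how such an attribute block looks when built through the ONNX protobuf-generated C++ classes. The function name, the epsilon attribute, and the onnx_pb.h include are illustrative assumptions, not taken from the patch; only the spatial handling mirrors the change above.

    // Illustrative sketch only: build BatchNormalization attributes with the
    // protobuf-generated onnx::* classes.  Epsilon is shown purely as an
    // example of a neighbouring attribute; spatial mirrors the patch above.
    #include <onnx/onnx_pb.h>  // assumed header exposing the generated protos

    void BuildBatchNormAttrs(onnx::NodeProto* node_proto) {
      node_proto->set_op_type("BatchNormalization");

      // FLOAT attribute for numerical stability (value shown is illustrative).
      onnx::AttributeProto* const epsilon = node_proto->add_attribute();
      epsilon->set_name("epsilon");
      epsilon->set_type(onnx::AttributeProto::FLOAT);
      epsilon->set_f(1e-5f);

      // INT attribute: disable spatial mode so the exported node keeps
      // MXNet's per-feature statistics instead of the opset-8 default of 1.
      onnx::AttributeProto* const spatial = node_proto->add_attribute();
      spatial->set_name("spatial");
      spatial->set_type(onnx::AttributeProto::INT);
      spatial->set_i(0);
    }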
 
 Review comment:
   Good call out, but the test that has been added will cover this.  BatchNorms are in ResNet, and the math would be way off if the default values didn't align.
   
   One of the reasons I converted to ONNX IR 3 / opset 8 was that it seemed to align well with MXNet's defaults, which meant we were able to reduce code in several places, but there were still a few customizations like this one required.
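
   To make the point about test coverage concrete, here is a hedged sketch of the kind of check a graph-level test can apply: walk the exported GraphProto and confirm every BatchNormalization node explicitly carries spatial = 0 rather than inheriting the opset-8 default. The helper name and the way the graph is obtained are hypothetical; the accessors are the standard protobuf-generated ones for onnx.proto.

    #include <onnx/onnx_pb.h>  // assumed header exposing the generated protos

    // Hypothetical helper: returns true only if every BatchNormalization node
    // in the graph explicitly sets the "spatial" attribute to 0.
    bool AllBatchNormsSpatialDisabled(const onnx::GraphProto& graph) {
      for (const onnx::NodeProto& node : graph.node()) {
        if (node.op_type() != "BatchNormalization") continue;
        bool spatial_off = false;
        for (const onnx::AttributeProto& attr : node.attribute()) {
          if (attr.name() == "spatial" && attr.i() == 0) spatial_off = true;
        }
        // A missing attribute falls back to the opset-8 default (spatial = 1),
        // which would silently diverge from MXNet's per-feature statistics.
        if (!spatial_off) return false;
      }
      return true;
    }

   An end-to-end accuracy test like the ResNet one mentioned above catches the same mismatch indirectly, since the statistics would then be computed over the wrong axes.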
