[GitHub] wentingj commented on a change in pull request #10433: [MXNET-290] MKLDNN support for model quantization

2018-04-19 Thread GitBox
wentingj commented on a change in pull request #10433: [MXNET-290] MKLDNN 
support for model quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#discussion_r182657549
 
 

 ##
 File path: src/operator/quantization/quantize_graph_pass.cc
 ##
 @@ -198,7 +198,7 @@ Graph QuantizeGraph(Graph &&src) {
 NodePtr mirror_node = mirror_map.at(e.node.get());
 NodeEntry mirror_entry = NodeEntry{
   mirror_node, e.index, e.version};
-size_t num_outputs = e.node->num_outputs();
+size_t num_outputs = mirror_node->num_outputs() - 2;
 
 Review comment:
  When MKLDNN is enabled, fp32 pooling has two outputs (the second is the 
workspace), so num_outputs cannot be taken from the fp32 op node when MKLDNN is enabled.
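
A minimal standalone sketch of the point above, using mock types rather than MXNet's real Node/NodeEntry classes: a quantized operator appends two extra outputs (min and max of the quantized range) to the outputs of its fp32 counterpart, and with MKLDNN enabled fp32 pooling itself already reports two outputs (data + workspace), so the fp32 node's count is not a safe proxy while the quantized mirror node's count is.

```cpp
#include <cstddef>

// Mock node type, standing in for MXNet's nnvm::Node (illustrative only).
struct MockNode {
  std::size_t outputs;
  std::size_t num_outputs() const { return outputs; }
};

// Recover the fp32 op's output count from its quantized mirror node by
// stripping the trailing min/max pair appended during quantization.
std::size_t RealNumOutputs(const MockNode& mirror_node) {
  return mirror_node.num_outputs() - 2;
}
```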


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wentingj commented on a change in pull request #10433: [MXNET-290] MKLDNN support for model quantization

2018-04-19 Thread GitBox
wentingj commented on a change in pull request #10433: [MXNET-290] MKLDNN 
support for model quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#discussion_r182641222
 
 

 ##
 File path: src/c_api/c_api_symbolic.cc
 ##
 @@ -595,7 +597,12 @@ int MXQuantizeSymbol(SymbolHandle sym_handle,
 offline.emplace(offline_params[i]);
   }
   g.attrs["offline_params"] = std::make_shared<nnvm::any>(std::move(offline));
-  g = ApplyPass(std::move(g), "QuantizeGraph");
+#if MXNET_USE_MKLDNN == 1
+  if (dev_type == Context::kCPU && dev_id == 0)
 
 Review comment:
   I will remove the dev_id check here.
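
A hedged sketch of the dispatch this diff adds: choose the quantization graph pass from the context's device type only, with the dev_id check removed as promised above. The kCPU/kGPU constants and pass names follow this thread but are illustrative stand-ins, not MXNet's exact API.

```cpp
#include <string>

// Illustrative stand-in for MXNet's Context device types.
enum DevType { kCPU, kGPU };

std::string SelectQuantizePass(DevType dev_type) {
  // In the real code this branch is guarded by #if MXNET_USE_MKLDNN == 1.
  if (dev_type == kCPU)
    return "QuantizeGraphUnsigned";  // MKLDNN path (u8 quantization)
  return "QuantizeGraph";            // default path (s8 quantization)
}
```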




[GitHub] wentingj commented on a change in pull request #10433: [MXNET-290] MKLDNN support for model quantization

2018-04-19 Thread GitBox
wentingj commented on a change in pull request #10433: [MXNET-290] MKLDNN 
support for model quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#discussion_r182640860
 
 

 ##
 File path: src/operator/quantization/quantized_conv.cc
 ##
 @@ -86,12 +89,10 @@ bool QuantizedConvType(const nnvm::NodeAttrs& attrs,
   const ConvolutionParam& param = nnvm::get<ConvolutionParam>(attrs.parsed);
   CHECK_EQ(in_type->size(), param.no_bias? 6U : 9U);
   CHECK_EQ(out_type->size(), 3U);
-  TYPE_ASSIGN_CHECK(*in_type, 0, mshadow::kInt8);
 
 Review comment:
   Since MKLDNN supports the u8 input data type while cuDNN uses s8, we moved 
the input data type check into FComputeEx.
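
A hedged sketch of that reasoning: the acceptable quantized input dtype is backend-specific (MKLDNN consumes uint8, cuDNN int8), so a fixed TYPE_ASSIGN_CHECK at type-inference time is too strict, and the check belongs with the compute function. The enum values are illustrative stand-ins for mshadow's type flags.

```cpp
// Illustrative stand-ins, not mshadow's real flag values.
enum DType { kInt8, kUint8 };
enum Backend { kMKLDNN, kCUDNN };

// Runtime check in the spirit of the FComputeEx-level validation:
// each backend accepts exactly one quantized input dtype.
bool QuantizedConvInputTypeOK(Backend backend, DType in_dtype) {
  return in_dtype == (backend == kMKLDNN ? kUint8 : kInt8);
}
```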




[GitHub] wentingj commented on a change in pull request #10433: [MXNET-290] MKLDNN support for model quantization

2018-04-18 Thread GitBox
wentingj commented on a change in pull request #10433: [MXNET-290] MKLDNN 
support for model quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#discussion_r182639741
 
 

 ##
 File path: src/operator/quantization/quantize_graph_pass.cc
 ##
 @@ -198,7 +198,7 @@ Graph QuantizeGraph(Graph &&src) {
 NodePtr mirror_node = mirror_map.at(e.node.get());
 NodeEntry mirror_entry = NodeEntry{
   mirror_node, e.index, e.version};
-size_t num_outputs = e.node->num_outputs();
+size_t num_outputs = mirror_node->num_outputs() - 2;
 
 Review comment:
   Yes, I will fix it back for the QuantizeGraph pass. Besides, we have added a 
QuantizeGraphUnsigned pass for MKLDNN, since MKLDNN supports the u8 input data 
type. So the output data type of the quantize op will be set to u8 in the 
QuantizeGraphUnsigned pass.
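
The rule described above can be sketched as follows; the pass names come from this thread, while the dtype names are illustrative stand-ins for mshadow's kInt8/kUint8 flags.

```cpp
#include <string>

// Illustrative quantized dtype tags (not mshadow's real flags).
enum QDType { kQInt8, kQUint8 };

// The quantize op's output dtype depends on which pass produced the graph:
// s8 for the generic QuantizeGraph pass, u8 for the MKLDNN-only
// QuantizeGraphUnsigned pass.
QDType QuantizeOutputDType(const std::string& pass_name) {
  return pass_name == "QuantizeGraphUnsigned" ? kQUint8 : kQInt8;
}
```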




[GitHub] wentingj commented on a change in pull request #10433: [MXNET-290] MKLDNN support for model quantization

2018-04-06 Thread GitBox
wentingj commented on a change in pull request #10433: [MXNET-290] MKLDNN 
support for model quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#discussion_r179894359
 
 

 ##
 File path: src/operator/nn/pooling.cc
 ##
 @@ -368,7 +368,11 @@ height, width)*.
 })
 .set_attr<nnvm::FListOutputNames>("FListOutputNames",
 [](const NodeAttrs& attrs) {
-  return std::vector<std::string>{"output"};
+  const PoolingParam &param = nnvm::get<PoolingParam>(attrs.parsed);
+  if (GetNumOutputs(param) == 2)
+    return std::vector<std::string>{"output", "workspace"};
+  else
+    return std::vector<std::string>{"output"};
 
 Review comment:
   Without this fix, this PR segfaults when running launch_quantize.sh.
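
A standalone sketch of why the FListOutputNames fix matters: callers index the returned name list by output id, so if MKLDNN pooling reports two outputs but the list has one entry, `names.at(1)` throws (and unchecked `operator[]` reads out of bounds, matching the segfault described). MockPoolingParam and GetNumOutputs are stand-ins for the real MXNet types.

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Mock of PoolingParam: tracks only whether the MKLDNN workspace output exists.
struct MockPoolingParam { bool with_workspace; };

std::size_t GetNumOutputs(const MockPoolingParam& p) {
  return p.with_workspace ? 2 : 1;
}

// Name list sized to match the reported output count, mirroring the fix.
std::vector<std::string> ListOutputNames(const MockPoolingParam& p) {
  if (GetNumOutputs(p) == 2)
    return {"output", "workspace"};
  return {"output"};
}
```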

