[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318234414
 
 

 ##
 File path: src/operator/nn/upsampling-inl.h
 ##
 @@ -48,16 +48,18 @@ enum UpSamplingMultiInputMode {kConcat, kSum};
 }  // namespace up_enum
 
 struct UpSamplingParam : public dmlc::Parameter<UpSamplingParam> {
-  int scale;
+  TShape scale;
   int num_filter;
   int sample_type;
   int num_args;
   int multi_input_mode;
   uint64_t workspace;
   DMLC_DECLARE_PARAMETER(UpSamplingParam) {
 DMLC_DECLARE_FIELD(scale)
-.set_range(1, 1000)
-.describe("Up sampling scale");
+.set_default(TShape())
 
 Review comment:
   Yep, made changes. Not sure if there's any historical or significant reason 
for the 1000 range limit though.




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318234175
 
 

 ##
 File path: src/operator/nn/upsampling.cc
 ##
 @@ -58,17 +61,19 @@ static bool UpSamplingShape(const nnvm::NodeAttrs& attrs,
 }
   } else {
 CHECK_EQ(in_shape->size(), 2U) << "Input:[data, weight]";
+CHECK_EQ(scale_h, scale_w) <<
+"UpSamplingBilinear: Scale should be the same along all dimensions for bilinear upsampling";
 CHECK_EQ(dshape.ndim(), 4U) << \
   "UpSamplingBilinear: Input data should be 4D in (batch, channel, y, x)";
 if (!shape_is_known(dshape)) return false;
-int kernel = 2 * param_.scale - param_.scale % 2;
+int kernel = static_cast<int>(2.0 * scale_h - ::fmod(scale_h, 2));
 
 Review comment:
   Didn't think of that--thanks!
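  As a quick check of the kernel formula in the hunk above (a sketch, assuming an
  integer height scale), kernel = 2*scale - scale % 2 gives:

      for s in (1, 2, 3, 4):
          print(s, 2 * s - s % 2)   # 1 -> 1, 2 -> 4, 3 -> 5, 4 -> 8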




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318234331
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -1597,16 +1605,29 @@ def _init_bilinear(arr, f):
 assert out.shape == data_shape[:2] + target_shape
 
 
+"""
+The test cases include integer, tuple, 
+and empty tuple scales on up to 3 shapes 
+at once with the shapes having various sizes 
+for their heights and widths
+"""
 @with_seed()
 def test_nearest_upsampling():
-for root_scale in [1,2,3]:
-for scale in [1,2,3]:
-for num_shape in [1,2,3]:
-for base in [1,2,3]:
-shapes = [(1,3,base*root_scale*scale**(num_shape-1-i),base*root_scale*scale**(num_shape-1-i)) for i in range(num_shape)]
+for root_scale in [1, 2, (3), (2,3), (3,2), (1,1), (5,1), (2,2), ()]:
 
 Review comment:
   Cool! Made changes.




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318234414
 
 

 ##
 File path: src/operator/nn/upsampling-inl.h
 ##
 @@ -48,16 +48,18 @@ enum UpSamplingMultiInputMode {kConcat, kSum};
 }  // namespace up_enum
 
 struct UpSamplingParam : public dmlc::Parameter<UpSamplingParam> {
-  int scale;
+  TShape scale;
   int num_filter;
   int sample_type;
   int num_args;
   int multi_input_mode;
   uint64_t workspace;
   DMLC_DECLARE_PARAMETER(UpSamplingParam) {
 DMLC_DECLARE_FIELD(scale)
-.set_range(1, 1000)
-.describe("Up sampling scale");
+.set_default(TShape())
 
 Review comment:
   Yep, made changes. Not sure if there's any historical or significant reason 
for the range limits though.




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318215704
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -1597,16 +1605,29 @@ def _init_bilinear(arr, f):
 assert out.shape == data_shape[:2] + target_shape
 
 
+"""
+The test cases include integer, tuple, 
+and empty tuple scales on up to 3 shapes 
+at once with the shapes having various sizes 
+for their heights and widths
+"""
 @with_seed()
 def test_nearest_upsampling():
-for root_scale in [1,2,3]:
-for scale in [1,2,3]:
-for num_shape in [1,2,3]:
-for base in [1,2,3]:
-shapes = [(1,3,base*root_scale*scale**(num_shape-1-i),base*root_scale*scale**(num_shape-1-i)) for i in range(num_shape)]
+for root_scale in [1, 2, (3), (2,3), (3,2), (1,1), (5,1), (2,2), ()]:
+for scale in [1, 2, 3]:
+for num_shape in [1, 2, 3]:
+for base in [1, 2, 3]:
+root_h = root_w = 1
+if type(root_scale) is int:
+root_h = root_w = root_scale
+elif len(root_scale) == 1:
+root_h = root_w = root_scale[0]
+elif len(root_scale) >= 2:
+root_h = root_scale[0]
+root_w = root_scale[1]
+shapes = [(1, 3, base*root_h*scale**(num_shape-1-i), base*root_w*scale**(num_shape-1-i)) for i in range(num_shape)]
 check_nearest_upsampling_with_shape(shapes, scale, root_scale)
 
-
 
 Review comment:
   I see, changes made.




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318215183
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -1559,15 +1559,23 @@ def check_deconvolution_forward_with_bias(shape=(1, 16, 5, 5), num_filter=32, nu
 def check_nearest_upsampling_with_shape(shapes, scale, root_scale):
 arr = {'arg_%d'%i: mx.random.uniform(-10.0, 10.0, shape, ctx=mx.cpu()).copyto(default_context()) for i, shape in zip(range(len(shapes)), shapes)}
 arr_grad = {'arg_%d'%i: mx.nd.zeros(shape) for i, shape in zip(range(len(shapes)), shapes)}
-
 up = mx.sym.UpSampling(*[mx.sym.Variable('arg_%d'%i) for i in range(len(shapes))], sample_type='nearest', scale=root_scale)
 exe = up.bind(default_context(), args=arr, args_grad=arr_grad)
 exe.forward(is_train=True)
 exe.backward(exe.outputs)
 for k in range(len(shapes)):
 name = 'arg_%d'%k
-assert_allclose(arr[name].asnumpy()*root_scale**2*scale**(2*k), arr_grad[name].asnumpy(), rtol=1e-4)
-
+out = arr_grad[name].asnumpy()
+root_h = root_w = 1
+if type(root_scale) is int:
+root_h = root_w = root_scale
+elif len(root_scale) == 1:
+root_h = root_w = root_scale[0]
+elif len(root_scale) >= 2:
+root_h = root_scale[0]
+root_w = root_scale[1]
+exp = arr[name].asnumpy()*root_h*root_w*scale**(2*k)
 
 Review comment:
   Sure, changes made.




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318214363
 
 

 ##
 File path: src/operator/nn/upsampling.cc
 ##
 @@ -37,13 +37,16 @@ static bool UpSamplingShape(const nnvm::NodeAttrs& attrs,
   CHECK_GE(in_shape->size(), 1U);
  const mxnet::TShape &dshape = (*in_shape)[0];
   mxnet::TShape oshape = dshape;
+  mxnet::TShape scale_hw = scaleComp(param_);
+  int scale_h = scale_hw[0];
+  int scale_w = scale_hw[1];
   if (param_.sample_type == up_enum::kNearest) {
CHECK_EQ(in_shape->size(), static_cast<size_t>(param_.num_args));
 oshape[1] = 0;
 for (auto& shape : *in_shape) {
   CHECK_EQ(shape.ndim(), 4U) << \
 "UpSamplingNearest: Input data should be 4D in (batch, channel, y, x)";
-  int oh = dshape[2]*param_.scale, ow = dshape[3]*param_.scale;
+  int oh = dshape[2]*scale_h, ow = dshape[3]*scale_w;
 
 Review comment:
   Changes made.




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-27 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r318213793
 
 

 ##
 File path: src/operator/nn/upsampling-inl.h
 ##
 @@ -84,6 +86,21 @@ struct UpSamplingParam : public dmlc::Parameter<UpSamplingParam> {
   }
 };  // struct UpSamplingParam
 
+inline std::vector<int> scaleComp(const UpSamplingParam &param) {
 
 Review comment:
   Changing back to vector after another discussion with @apeforest.




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-26 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r317880770
 
 

 ##
 File path: src/operator/nn/upsampling-inl.h
 ##
 @@ -84,6 +86,21 @@ struct UpSamplingParam : public dmlc::Parameter<UpSamplingParam> {
   }
 };  // struct UpSamplingParam
 
+inline std::vector<int> scaleComp(const UpSamplingParam &param) {
 
 Review comment:
   Okay! Made the change.




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-23 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r317331798
 
 

 ##
 File path: tests/python/unittest/test_operator.py
 ##
 @@ -1599,14 +1607,21 @@ def _init_bilinear(arr, f):
 
 @with_seed()
 def test_nearest_upsampling():
-for root_scale in [1,2,3]:
-for scale in [1,2,3]:
-for num_shape in [1,2,3]:
-for base in [1,2,3]:
-shapes = [(1,3,base*root_scale*scale**(num_shape-1-i),base*root_scale*scale**(num_shape-1-i)) for i in range(num_shape)]
+for root_scale in [1, 2, (2,3), (3,2), (1,1), (5,1), (2,2), ()]:
+for scale in [1, 2, 3]:
+for num_shape in [1, 2, 3]:
+for base in [1, 2, 3]:
+root_h = root_w = 1
+if type(root_scale) is int:
+root_h = root_w = root_scale
+elif len(root_scale) == 1:
+root_h = root_w = root_scale[0]
+elif len(root_scale) >= 2:
+root_h = root_scale[0]
+root_w = root_scale[1]
+shapes = [(1, 3, base*root_h*scale**(num_shape-1-i), base*root_w*scale**(num_shape-1-i)) for i in range(num_shape)]
 
 Review comment:
   The test cases include integer, tuple, and empty tuple scales on up to 3 
shapes at once with the shapes having various sizes for their heights and widths
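
  For example, a worked case of the shapes construction above (values chosen just
  for illustration): with base=1, root_scale=(2,3), scale=2 and num_shape=2,

      base, scale, num_shape = 1, 2, 2
      root_h, root_w = 2, 3          # from root_scale = (2, 3)
      shapes = [(1, 3, base*root_h*scale**(num_shape-1-i),
                 base*root_w*scale**(num_shape-1-i)) for i in range(num_shape)]
      print(shapes)                  # [(1, 3, 4, 6), (1, 3, 2, 3)]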




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-23 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r317327754
 
 

 ##
 File path: src/operator/nn/upsampling-inl.h
 ##
 @@ -84,6 +86,21 @@ struct UpSamplingParam : public dmlc::Parameter<UpSamplingParam> {
   }
 };  // struct UpSamplingParam
 
+inline std::vector<int> scaleComp(const UpSamplingParam &param) {
+  std::vector<int> scaleArr{ 1, 1 };
+  if (param.scale.ndim() == 1) {
+scaleArr[0] = param.scale[0];
+scaleArr[1] = param.scale[0];
+  } else if (param.scale.ndim() == 2) {
+scaleArr[0] = param.scale[0];
+scaleArr[1] = param.scale[1];
+  } else if (param.scale.ndim() == 4) {
+scaleArr[0] = param.scale[2];
+scaleArr[1] = param.scale[3];
+  }
 
 Review comment:
   I believe when a single scalar is passed, it is a tuple of ndim == 1 with 
the scalar value in param.scale[0]




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-23 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r317305547
 
 

 ##
 File path: src/operator/nn/upsampling-inl.h
 ##
 @@ -48,16 +48,18 @@ enum UpSamplingMultiInputMode {kConcat, kSum};
 }  // namespace up_enum
 
 struct UpSamplingParam : public dmlc::Parameter<UpSamplingParam> {
-  int scale;
+  TShape scale;
   int num_filter;
   int sample_type;
   int num_args;
   int multi_input_mode;
   uint64_t workspace;
   DMLC_DECLARE_PARAMETER(UpSamplingParam) {
 DMLC_DECLARE_FIELD(scale)
-.set_range(1, 1000)
-.describe("Up sampling scale");
+.set_default(TShape())
 
 Review comment:
   I believe it is still backwards compatible, as it supports a scale as either 
an integer or a tuple.
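
  A minimal usage sketch of what that compatibility would look like (assuming this
  PR's tuple support; the exact accepted forms depend on the final patch):

      import mxnet as mx

      data = mx.sym.Variable('data')
      # old-style integer scale keeps working ...
      up_int = mx.sym.UpSampling(data, scale=2, sample_type='nearest', num_args=1)
      # ... while a tuple allows different height/width factors
      up_tup = mx.sym.UpSampling(data, scale=(2, 3), sample_type='nearest', num_args=1)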




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-23 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r317305265
 
 

 ##
 File path: src/operator/nn/upsampling-inl.h
 ##
 @@ -84,6 +86,23 @@ struct UpSamplingParam : public dmlc::Parameter<UpSamplingParam> {
   }
 };  // struct UpSamplingParam
 
+inline int* scaleComp(const UpSamplingParam &param) {
 
 Review comment:
   Good idea--will make changes




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-19 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r315439884
 
 

 ##
 File path: python/mxnet/contrib/onnx/mx2onnx/_op_translations.py
 ##
 @@ -2047,3 +2047,29 @@ def convert_broadcast_to(node, **kwargs):
 )
 
 return [tensor_node, expand_node]
+
+@mx_op.register("UpSampling")
+def convert_upsample(node, **kwargs):
+"""Map MXNet's UpSampling operator attributes to onnx's Upsample operator
+and return the created node.
+"""
+name, input_nodes, attrs = get_inputs(node, kwargs)
+
+sample_type = attrs.get('sample_type', 'nearest')
+sample_type = 'linear' if sample_type == 'bilinear' else sample_type
+scale = convert_string_to_list(attrs.get('scale'))
+scaleh = float(scale[0])
+scalew = float(scale[0])
+if len(scale) > 1:
+scalew = float(scale[1])
 
 Review comment:
   Assuming I'm not misunderstanding the question, I don't believe so. The 
checks here are just to correctly pass in the scales to ONNX's upsampling 
operator.
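
  For context, a rough sketch of how the two factors could be forwarded to an ONNX
  Upsample node for NCHW data (opset-7 style, where scales is an attribute; the
  helper below is illustrative, not the code in this PR):

      from onnx import helper

      def make_upsample_node(name, input_nodes, sample_type, scaleh, scalew):
          # ONNX expects one scale per input axis; batch and channel stay at 1.0
          return helper.make_node(
              'Upsample',
              inputs=input_nodes,
              outputs=[name],
              name=name,
              mode=sample_type,
              scales=[1.0, 1.0, scaleh, scalew],
          )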




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-19 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r315438205
 
 

 ##
 File path: src/operator/nn/upsampling-inl.h
 ##
 @@ -84,6 +86,23 @@ struct UpSamplingParam : public dmlc::Parameter<UpSamplingParam> {
   }
 };  // struct UpSamplingParam
 
+inline int* scaleComp(const UpSamplingParam &param) {
+  int* scaleArr = new int[2];
+  scaleArr[0] = 1;
+  scaleArr[1] = 1;
+  if (param.scale.ndim() == 1) {
+scaleArr[0] = param.scale[0];
+scaleArr[1] = param.scale[0];
+  } else if (param.scale.ndim() == 2) {
+scaleArr[0] = param.scale[0];
+scaleArr[1] = param.scale[1];
+  } else if (param.scale.ndim() == 4) {
+scaleArr[0] = param.scale[2];
 
 Review comment:
   I added a comment--I kept most of the previous implementation before tuple 
scales were supported, and I believe this section is used to figure out the 
height/width scaling values when the multi-input mode is not activated 
(num_args <= 1). However, because this code block was repeated multiple times 
throughout upsampling-inl.h and upsampling.cc, I put it into a function.
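
  A small Python mirror of that selection logic (scale_comp here is a hypothetical
  stand-in, just to show which entries are read for each tuple length):

      def scale_comp(scale):
          # () -> no scaling; (s,) -> same factor for h and w; (h, w); or an
          # NCHW-style 4-tuple where the last two entries are h and w
          if len(scale) == 1:
              return scale[0], scale[0]
          if len(scale) == 2:
              return scale[0], scale[1]
          if len(scale) == 4:
              return scale[2], scale[3]
          return 1, 1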




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-19 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r315438205
 
 

 ##
 File path: src/operator/nn/upsampling-inl.h
 ##
 @@ -84,6 +86,23 @@ struct UpSamplingParam : public dmlc::Parameter<UpSamplingParam> {
   }
 };  // struct UpSamplingParam
 
+inline int* scaleComp(const UpSamplingParam &param) {
+  int* scaleArr = new int[2];
+  scaleArr[0] = 1;
+  scaleArr[1] = 1;
+  if (param.scale.ndim() == 1) {
+scaleArr[0] = param.scale[0];
+scaleArr[1] = param.scale[0];
+  } else if (param.scale.ndim() == 2) {
+scaleArr[0] = param.scale[0];
+scaleArr[1] = param.scale[1];
+  } else if (param.scale.ndim() == 4) {
+scaleArr[0] = param.scale[2];
 
 Review comment:
  I added a comment--I kept most of the previous implementation from before 
tuple scales were supported, and I believe this section is used to figure out 
the height/width scaling values when the multi-input mode is not activated.




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-19 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r315326230
 
 

 ##
 File path: python/mxnet/contrib/onnx/mx2onnx/_op_translations.py
 ##
 @@ -2047,3 +2047,29 @@ def convert_broadcast_to(node, **kwargs):
 )
 
 return [tensor_node, expand_node]
+
+@mx_op.register("UpSampling")
+def convert_upsample(node, **kwargs):
+"""Map MXNet's UpSampling operator attributes to onnx's Upsample operator
+and return the created node.
+"""
+name, input_nodes, attrs = get_inputs(node, kwargs)
+
+sample_type = attrs.get('sample_type', 'nearest')
+sample_type = 'linear' if sample_type == 'bilinear' else sample_type
+scale = convert_string_to_list(attrs.get('scale'))
+scaleh = scalew = float(scale[0])
+if len(scale) > 1:
+scaleh = float(scale[0])
+scalew = float(scale[1])
 
 Review comment:
   If an int scale is provided, both scaleh and scalew are given that value. 
However, if a tuple of scales is given, scaleh is assigned the first value and 
scalew is assigned the second value.
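
  Roughly, the attribute handling described above behaves like this sketch
  (parse_scale is a hypothetical stand-in for convert_string_to_list plus the
  float casts):

      def parse_scale(attr):
          values = [v for v in attr.strip('()[] ').split(',') if v.strip()]
          scaleh = scalew = float(values[0])
          if len(values) > 1:
              scalew = float(values[1])
          return scaleh, scalew

      # parse_scale('2')      -> (2.0, 2.0)
      # parse_scale('(2, 3)') -> (2.0, 3.0)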




[GitHub] [incubator-mxnet] benhe2011 commented on a change in pull request #15811: [MXNET-891] Support tuple of scales in upsample operator

2019-08-09 Thread GitBox
benhe2011 commented on a change in pull request #15811: [MXNET-891] Support 
tuple of scales in upsample operator
URL: https://github.com/apache/incubator-mxnet/pull/15811#discussion_r312660196
 
 

 ##
 File path: python/mxnet/contrib/onnx/mx2onnx/_op_translations.py
 ##
 @@ -2047,3 +2047,29 @@ def convert_broadcast_to(node, **kwargs):
 )
 
 return [tensor_node, expand_node]
+
+@mx_op.register("UpSampling")
+def convert_upsample(node, **kwargs):
+"""Map MXNet's UpSampling operator attributes to onnx's Upsample operator
+and return the created node.
+"""
+name, input_nodes, attrs = get_inputs(node, kwargs)
+
+sample_type = attrs.get('sample_type', 'nearest')
+sample_type = 'linear' if sample_type == 'bilinear' else sample_type
+scale = convert_string_to_list(attrs.get('scale'))
+scaleh = scalew = float(scale[0])
+if len(scale) > 1:
+scaleh = float(scale[0])
+scalew = float(scale[1])
 
 Review comment:
  Should be index 0, I believe--I'll change the code to be clearer.

