[GitHub] [incubator-tvm] masahi commented on issue #4799: [QNN] Doc fix on convolution and dequantize

2020-01-31 Thread GitBox
masahi commented on issue #4799: [QNN] Doc fix on convolution and dequantize
URL: https://github.com/apache/incubator-tvm/pull/4799#issuecomment-580980672
 
 
   this is the error I get if I leave out kernel_size:
   ```
  File "/home/masa/projects/dev/tvm/python/tvm/_ffi/_ctypes/function.py", line 72, in cfun
    rv = local_pyfunc(*pyargs)
  File "/home/masa/projects/dev/tvm/python/tvm/relay/op/nn/_nn.py", line 269, in alter_op_layout_conv2d
    return topi.nn.conv2d_alter_layout(attrs, inputs, tinfos, op)
  File "", line 2, in conv2d_alter_layout
  File "/home/masa/projects/dev/tvm/python/tvm/target.py", line 382, in dispatch_func
    return dispatch_dict[k](*args, **kwargs)
  File "/home/masa/projects/dev/tvm/topi/python/topi/x86/conv2d_alter_op.py", line 45, in _alter_conv2d_layout
    kh, kw = attrs.get_int_tuple("kernel_size")
  File "/home/masa/projects/dev/tvm/python/tvm/attrs.py", line 63, in get_int_tuple
    return tuple(x.value for x in self.__getattr__(key))
TypeError: 'NoneType' object is not iterable
   ```
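
   For context, a minimal stand-in reproduces the failure mode: the `get_int_tuple` helper below is hypothetical, mimicking what `Attrs.get_int_tuple` does at the bottom of the traceback (it iterates over the attribute value, so a `kernel_size` left as `None` raises the same `TypeError`):

   ```python
   # Hypothetical stand-in for attrs.get_int_tuple from the traceback above.
   def get_int_tuple(value):
       """Build an int tuple by iterating over an attribute value."""
       return tuple(int(x) for x in value)

   # With kernel_size set explicitly, the lookup succeeds:
   assert get_int_tuple([3, 3]) == (3, 3)

   # With kernel_size omitted (None), we hit the reported error:
   try:
       get_int_tuple(None)
   except TypeError as err:
       assert "not iterable" in str(err)
   ```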


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-tvm] masahi opened a new pull request #4799: [QNN] Doc fix on convolution and dequantize

2020-01-31 Thread GitBox
masahi opened a new pull request #4799: [QNN] Doc fix on convolution and 
dequantize
URL: https://github.com/apache/incubator-tvm/pull/4799
 
 
   cc @anijain2305 @vinx13 can you check if this change is correct? I'm not 
completely sure if kernel_size is required, but I hit an error if I leave it as None.




[GitHub] [incubator-tvm] icemelon9 merged pull request #4775: conv3d_ndhwc schedule

2020-01-31 Thread GitBox
icemelon9 merged pull request #4775: conv3d_ndhwc schedule
URL: https://github.com/apache/incubator-tvm/pull/4775
 
 
   




[GitHub] [incubator-tvm] icemelon9 commented on issue #4775: conv3d_ndhwc schedule

2020-01-31 Thread GitBox
icemelon9 commented on issue #4775: conv3d_ndhwc schedule
URL: https://github.com/apache/incubator-tvm/pull/4775#issuecomment-580976944
 
 
   Thanks @alexgl-github . This is now merged.




[incubator-tvm] branch master updated (90b2a1e -> cf173fd)

2020-01-31 Thread haichen
This is an automated email from the ASF dual-hosted git repository.

haichen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 90b2a1e  [Relay][Topi] Use SimplifyInference for L2 Normazlization. 
(#4795)
 add cf173fd  Add schedule for conv3d NDHWC layout (#4775)

No new revisions were added by this update.

Summary of changes:
 topi/python/topi/nn/conv3d.py| 14 +++
 topi/python/topi/x86/__init__.py |  1 +
 topi/python/topi/x86/conv3d.py   | 82 
 3 files changed, 90 insertions(+), 7 deletions(-)
 create mode 100644 topi/python/topi/x86/conv3d.py



[GitHub] [incubator-tvm] anijain2305 opened a new pull request #4798: [QNN] Optimize lowering for requantize and FixedPointMultiply.

2020-01-31 Thread GitBox
anijain2305 opened a new pull request #4798: [QNN] Optimize lowering for 
requantize and FixedPointMultiply.
URL: https://github.com/apache/incubator-tvm/pull/4798
 
 
   As Title.
   
   Changes are verified through existing tests.
   
   @jackwish @FrozenGene @yzhliu @vinx13 
   




[GitHub] [incubator-tvm] anijain2305 opened a new pull request #4797: [AutoTVM] Minor bug fixes in AutoTVM for QNN graphs

2020-01-31 Thread GitBox
anijain2305 opened a new pull request #4797: [AutoTVM] Minor bug fixes in 
AutoTVM for QNN graphs
URL: https://github.com/apache/incubator-tvm/pull/4797
 
 
   I encountered many bugs during autotuning, both kernel and graph, for a QNN 
graph. This PR fixes all of these minor bugs.
   
   There is one major implementation remaining for fixing the graph tuner. More 
on that in a separate PR.
   
   @icemelon9 @yzhliu @kevinthesun 




[GitHub] [incubator-tvm] wweic commented on issue #4628: [Object] Add String container

2020-01-31 Thread GitBox
wweic commented on issue #4628: [Object] Add String container
URL: https://github.com/apache/incubator-tvm/pull/4628#issuecomment-580966470
 
 
   @tqchen I'll send new revision soon.




[GitHub] [incubator-tvm] tqchen commented on issue #4644: [WIP] Relay op strategy

2020-01-31 Thread GitBox
tqchen commented on issue #4644: [WIP] Relay op strategy
URL: https://github.com/apache/incubator-tvm/pull/4644#issuecomment-580955527
 
 
   cc @merrymercy @vinx13 @ZihengJiang @jwfromm please help review if you have 
time




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4775: conv3d_ndhwc schedule

2020-01-31 Thread GitBox
icemelon9 commented on a change in pull request #4775: conv3d_ndhwc schedule
URL: https://github.com/apache/incubator-tvm/pull/4775#discussion_r373715242
 
 

 ##
 File path: topi/python/topi/x86/conv3d.py
 ##
 @@ -0,0 +1,85 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-variable, too-many-locals
+# pylint: disable=unused-argument, redefined-builtin, no-else-return
+"""Conv3D operators"""
+import tvm
+from .. import generic, tag
+from ..util import traverse_inline
+
+@generic.schedule_conv3d_ndhwc.register("cpu")
+def schedule_conv3d_ndhwc(outs):
+"""TOPI schedule callback for conv3d
+
+Parameters
+--
+outs: Array of Tensor
+The computation graph description of conv3d
+in the format of an array of tensors.
+
+Returns
+---
+s: Schedule
+The computation schedule for conv3d.
+"""
+s = tvm.create_schedule([x.op for x in outs])
+output_op = outs[0].op
+scheduled_ops = []
 
 Review comment:
   Remove this line since it's useless now




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4775: conv3d_ndhwc schedule

2020-01-31 Thread GitBox
icemelon9 commented on a change in pull request #4775: conv3d_ndhwc schedule
URL: https://github.com/apache/incubator-tvm/pull/4775#discussion_r373715263
 
 

 ##
 File path: topi/python/topi/x86/conv3d.py
 ##
 @@ -0,0 +1,85 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-variable, too-many-locals
+# pylint: disable=unused-argument, redefined-builtin, no-else-return
+"""Conv3D operators"""
+import tvm
+from .. import generic, tag
+from ..util import traverse_inline
+
+@generic.schedule_conv3d_ndhwc.register("cpu")
+def schedule_conv3d_ndhwc(outs):
+"""TOPI schedule callback for conv3d
+
+Parameters
+--
+outs: Array of Tensor
+The computation graph description of conv3d
+in the format of an array of tensors.
+
+Returns
+---
+s: Schedule
+The computation schedule for conv3d.
+"""
+s = tvm.create_schedule([x.op for x in outs])
+output_op = outs[0].op
+scheduled_ops = []
+
+def _traverse(op):
+"""Traverse operators from computation graph"""
+if op in s.outputs and tag.is_broadcast(op.tag) and len(op.axis) == 5:
+# schedule bias + bn + relu
+n, d, h, w, c = op.axis
+fused = s[op].fuse(n, d, h, w)
+s[op].parallel(fused)
+s[op].vectorize(c)
+
+if 'conv3d_ndhwc' in op.tag:
+conv = op.output(0)
+kernel = op.input_tensors[1]
+# dilation stage
+if isinstance(kernel.op, tvm.tensor.ComputeOp) and "dilate" in 
kernel.op.tag:
+s[kernel].compute_inline()
+
+# padding stage
+data = op.input_tensors[0]
+data_pad = None
+if isinstance(data.op, tvm.tensor.ComputeOp) and "pad" in 
data.op.tag:
+# fuse pad h and w
+data_pad = data
+data = data_pad.op.input_tensors[0]
+_, _, h_pad, w_pad, _ = data_pad.op.axis
+pad_fused = s[data_pad].fuse(h_pad, w_pad)
+s[data_pad].parallel(pad_fused)
+
+# compute conv
+C = conv
+n, d, h, w, c = s[C].op.axis
+s[C].vectorize(c)
+if op != output_op: # fuse bias + bn + activation
+_, _, _, _, c_out = output_op.axis
+s[C].compute_at(s[output_op], c_out)
+else:
+# fuse batch, depth, height axes
+fused = s[C].fuse(n, d, h)
+s[C].parallel(fused)
+
+scheduled_ops.append(op)
 
 Review comment:
   remove this line




[GitHub] [incubator-tvm] alexgl-github commented on a change in pull request #4775: conv3d_ndhwc schedule

2020-01-31 Thread GitBox
alexgl-github commented on a change in pull request #4775: conv3d_ndhwc schedule
URL: https://github.com/apache/incubator-tvm/pull/4775#discussion_r373714466
 
 

 ##
 File path: topi/python/topi/x86/conv3d.py
 ##
 @@ -0,0 +1,92 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-variable, too-many-locals
+# pylint: disable=unused-argument, redefined-builtin, no-else-return
+"""Conv3D operators"""
+import tvm
+from .. import generic, tag
+
+@generic.schedule_conv3d_ndhwc.register("cpu")
+def schedule_conv3d_ndhwc(outs):
+"""TOPI schedule callback for conv3d
+
+Parameters
+--
+outs: Array of Tensor
+The computation graph description of conv3d
+in the format of an array of tensors.
+
+Returns
+---
+s: Schedule
+The computation schedule for conv3d.
+"""
+s = tvm.create_schedule([x.op for x in outs])
+output_op = outs[0].op
+scheduled_ops = []
+
+def traverse(op):
+"""Traverse operators from computation graph"""
+# inline all one-to-one-mapping operators except the last stage 
(output)
+if tag.is_broadcast(op.tag):
+if op not in s.outputs:
+s[op].compute_inline()
+else: # inject custom schedule
+if len(op.axis) == 5:
+# schedule bias + bn + activation
+n, d, h, w, c = op.axis
+fused = s[op].fuse(n, d, h, w)
+s[op].parallel(fused)
+s[op].vectorize(c)
+for tensor in op.input_tensors:
+if isinstance(tensor.op, tvm.tensor.ComputeOp) and tensor.op 
not in scheduled_ops:
+traverse(tensor.op)
+
+if 'conv3d_ndhwc' in op.tag:
+conv = op.output(0)
+kernel = op.input_tensors[1]
+# dilation stage
+if isinstance(kernel.op, tvm.tensor.ComputeOp) and "dilate" in 
kernel.op.tag:
+s[kernel].compute_inline()
+
+# padding stage
+data = op.input_tensors[0]
+data_pad = None
+if isinstance(data.op, tvm.tensor.ComputeOp) and "pad" in 
data.op.tag:
+# fuse pad h and w
+data_pad = data
+data = data_pad.op.input_tensors[0]
+_, _, h_pad, w_pad, _ = data_pad.op.axis
+pad_fused = s[data_pad].fuse(h_pad, w_pad)
+s[data_pad].parallel(pad_fused)
+
+# compute conv
+C = conv
+n, d, h, w, c = s[C].op.axis
+s[C].vectorize(c)
+if op != output_op: # fuse bias + bn + activation
+_, _, _, _, c_out = output_op.axis
+s[C].compute_at(s[output_op], c_out)
+else:
+# fuse batch, depth, height axes
+fused = s[C].fuse(n, d, h)
+s[C].parallel(fused)
+
+scheduled_ops.append(op)
+
+traverse(output_op)
 
 Review comment:
   @icemelon9 Changed to traverse_inline, thanks
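
   For reference, the bookkeeping this change removes can be sketched in plain Python: `traverse_inline` packages up a hand-written post-order walk over the op graph that must avoid revisiting shared inputs (the job of the `scheduled_ops` list). The toy `Op` class below is illustrative, not TVM's:

   ```python
   # Toy post-order traversal with a visited set -- the pattern that
   # topi.util.traverse_inline wraps for schedules like this one.
   class Op:
       """Minimal stand-in for a TVM ComputeOp: a name plus input ops."""
       def __init__(self, name, inputs=()):
           self.name = name
           self.inputs = list(inputs)

   def traverse(op, visited, visit):
       """Visit each reachable op exactly once, inputs before consumers."""
       if op in visited:
           return
       visited.add(op)
       for parent in op.inputs:
           traverse(parent, visited, visit)
       visit(op)

   # Diamond graph: conv feeds both bias and relu, so it is reachable twice.
   conv = Op("conv3d_ndhwc")
   bias = Op("bias_add", [conv])
   relu = Op("relu", [bias, conv])

   order = []
   traverse(relu, set(), lambda op: order.append(op.name))
   print(order)  # ['conv3d_ndhwc', 'bias_add', 'relu']
   ```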




[GitHub] [incubator-tvm] yzhliu merged pull request #4795: [Relay][Topi] Use SimplifyInference for L2 Normalization.

2020-01-31 Thread GitBox
yzhliu merged pull request #4795: [Relay][Topi] Use SimplifyInference for L2 
Normalization.
URL: https://github.com/apache/incubator-tvm/pull/4795
 
 
   




[incubator-tvm] branch master updated (10f85d0 -> 90b2a1e)

2020-01-31 Thread liuyizhi
This is an automated email from the ASF dual-hosted git repository.

liuyizhi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 10f85d0  Dedup BindParamByName function in VM compiler (#4793)
 add 90b2a1e  [Relay][Topi] Use SimplifyInference for L2 Normazlization. 
(#4795)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/op/nn/_nn.py  | 16 
 src/relay/pass/pattern_util.h  |  5 +++
 src/relay/pass/simplify_inference.cc   | 27 ++---
 topi/include/topi/cuda/normalization.h | 49 ---
 topi/include/topi/nn/l2_normalize.h| 72 --
 topi/include/topi/rocm/normalization.h | 11 --
 topi/python/topi/cuda/__init__.py  |  2 +-
 topi/python/topi/cuda/nn.py| 19 -
 topi/python/topi/generic/nn.py | 18 -
 topi/python/topi/nn/__init__.py|  1 -
 topi/python/topi/nn/l2_normalize.py| 45 -
 topi/python/topi/rocm/nn.py|  6 ---
 topi/src/topi.cc   | 17 
 topi/tests/python/test_topi_l2norm.py  | 63 -
 14 files changed, 28 insertions(+), 323 deletions(-)
 delete mode 100644 topi/include/topi/nn/l2_normalize.h
 delete mode 100644 topi/python/topi/nn/l2_normalize.py
 delete mode 100644 topi/tests/python/test_topi_l2norm.py



[GitHub] [incubator-tvm] yzhliu commented on issue #4795: [Relay][Topi] Use SimplifyInference for L2 Normalization.

2020-01-31 Thread GitBox
yzhliu commented on issue #4795: [Relay][Topi] Use SimplifyInference for L2 
Normalization.
URL: https://github.com/apache/incubator-tvm/pull/4795#issuecomment-580939287
 
 
   Thanks @anijain2305 




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4790: Fast exponent

2020-01-31 Thread GitBox
tqchen commented on a change in pull request #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#discussion_r373702986
 
 

 ##
 File path: topi/include/topi/elemwise.h
 ##
 @@ -360,5 +359,66 @@ inline Tensor full_like(const Tensor& x,
   }, name, tag);
 }
 
+/*
+ * \brief Fast exponential function implementation
+ *
+ * e^x = 2^(x * log2(e))
+ * Split the exponent x * log2(e) into an integer part n and a remainder f:
+ * n = floor(x * log2(e) + 1/2)
+ * f = x - n * ln(2)
+ * so that e^x = 2^n * e^f.
+ * The fractional factor is approximated by a degree-5 polynomial:
+ * e^f ~ 1 + f + f^2 * P(f)
+ */
+inline Tensor fast_exp(const Tensor& _x,
+   std::string name,
+   std::string tag) {
+  auto x_hi = make_const(DataType::Float(32), 88.3762626647950f);
+  auto x_lo = make_const(DataType::Float(32), -88.3762626647949f);
+  auto log2e = make_const(DataType::Float(32), 1.44269504088896341f);
+  auto ln2 = make_const(DataType::Float(32), 0.6931471805599453f);
+  PrimExpr p[6] = {make_const(DataType::Float(32), 1.9875691500E-4f),
+   make_const(DataType::Float(32), 1.3981999507E-3f),
+   make_const(DataType::Float(32), 8.3334519073E-3f),
+   make_const(DataType::Float(32), 4.1665795894E-2f),
+   make_const(DataType::Float(32), 1.665459E-1f),
+   make_const(DataType::Float(32), 5.001201E-1f)};
+  auto one = make_const(DataType::Float(32), 1.0f);
+  auto one_half = make_const(DataType::Float(32), 0.5f);
+  auto b = make_const(DataType::Float(32), 127.0f);
+
+  return compute(_x->shape,
+ [&](const Array<Var>& i) {
+   // clamp x
+   auto x = ::tvm::max(::tvm::min(_x(i), x_hi), x_lo);
+   // integer part
+   auto n = ::tvm::floor(x * log2e + one_half);
+   // fractional part
+   auto f = x - n * ln2;
+   auto y = (((((p[0] * f + p[1]) * f + p[2]) * f + p[3]) * f + p[4]) * f
+ + p[5]) * f * f + f + one;
+   // Return 2^n * exp(f).
+   auto ef = tvm::reinterpret(DataType::Float(32),
+  ::tvm::cast(DataType::Int(32), n + b) << 23);
+   return ::tvm::max(ef * y, _x(i));  // NOLINT(*)
+ },
+ name, tag);
+}
+
+
+inline Tensor exp(const Tensor& x,
 
 Review comment:
   please add doxygen comments for the function
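
   The approximation in the diff can be checked numerically in plain Python (constants copied from the hunk; `struct` stands in for TVM's `reinterpret`, and the final clamp against the input is omitted):

   ```python
   # Cephes-style expf: split x*log2(e) into integer part n and remainder
   # f = x - n*ln(2), approximate exp(f) with a degree-5 polynomial, then
   # scale by 2^n built directly from the float32 exponent bits.
   import math
   import struct

   P = [1.9875691500e-4, 1.3981999507e-3, 8.3334519073e-3,
        4.1665795894e-2, 1.665459e-1, 5.001201e-1]

   def fast_exp(x):
       x = max(min(x, 88.3762626647950), -88.3762626647949)  # clamp
       n = math.floor(x * 1.44269504088896341 + 0.5)  # integer part of x*log2(e)
       f = x - n * 0.6931471805599453                 # remainder, |f| <= ln(2)/2
       y = P[0]
       for c in P[1:]:                                # Horner evaluation of P(f)
           y = y * f + c
       y = y * f * f + f + 1.0                        # exp(f) ~ 1 + f + f^2*P(f)
       # 2^n by reinterpreting (n + 127) << 23 as a float32
       two_n = struct.unpack('<f', struct.pack('<I', (int(n) + 127) << 23))[0]
       return two_n * y

   for x in (-10.0, -1.0, 0.0, 0.5, 3.0, 10.0):
       assert abs(fast_exp(x) - math.exp(x)) <= 1e-5 * math.exp(x)
   ```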




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4790: Fast exponent

2020-01-31 Thread GitBox
tqchen commented on a change in pull request #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#discussion_r373702873
 
 

 ##
 File path: topi/include/topi/elemwise.h
 ##
 @@ -360,5 +359,66 @@ inline Tensor full_like(const Tensor& x,
   }, name, tag);
 }
 
+/*
+ * \brief Fast exponential function implementation
 
 Review comment:
   please add detailed comments about the arguments




[GitHub] [incubator-tvm] tqchen commented on issue #4628: [Object] Add String container

2020-01-31 Thread GitBox
tqchen commented on issue #4628: [Object] Add String container
URL: https://github.com/apache/incubator-tvm/pull/4628#issuecomment-580904651
 
 
   gentle ping




[GitHub] [incubator-tvm] anijain2305 opened a new pull request #4796: [QNN] Conv2D with dilation support.

2020-01-31 Thread GitBox
anijain2305 opened a new pull request #4796: [QNN] Conv2D with dilation support.
URL: https://github.com/apache/incubator-tvm/pull/4796
 
 
   Quantized SSD_VGG has a dilated conv. This PR supports better QNN lowering 
for symmetric dilated conv.
   
   Asymmetric dilated conv requires a dilated pooling op. If we see a use case, we 
can add that op. Currently, there is no major use case.
   
   @vinx13 @FrozenGene @jackwish @yzhliu @yidawang 
   




[GitHub] [incubator-tvm] huajsj commented on a change in pull request #4791: [TOPI] upsample operator 'NCHWinic' format support.

2020-01-31 Thread GitBox
huajsj commented on a change in pull request #4791: [TOPI] upsample operator 
'NCHWinic' format support.
URL: https://github.com/apache/incubator-tvm/pull/4791#discussion_r373667310
 
 

 ##
 File path: topi/python/topi/image/resize.py
 ##
 @@ -18,8 +18,37 @@
 """TVM operator input resize compute."""
 from __future__ import absolute_import
 import tvm
+from topi.util import nchw_pack_layout, nchw_xc_layout
 from .. import tag
 
+def get_2d_indices(indices, layout='NCHW'):
+    """ Get 2d indices """
+    (cc, inum, ic) = (0, 0, 0)
+    if layout == 'NHWC':
+        n, y, x, c = indices
+        cc = None
+    elif layout == 'NCHW':
+        n, c, y, x = indices
+        cc = None
+    elif nchw_pack_layout(layout):
+        n, c, y, x, inum, ic = indices
+    else:
+        n, c, y, x, cc = indices
+    return n, c, y, x, cc, inum, ic
+
+def get_2d_pixel(data, layout, boxes, image_height, image_width, n, c, y, x, cc, ib, ic):
+    """ Get 2d pixel """
+    if boxes is None:
+        y = tvm.max(tvm.min(y, image_height - 1), 0)
+        x = tvm.max(tvm.min(x, image_width - 1), 0)
+    if layout == 'NHWC':
+        return data(n, y, x, c).astype('float')
+    if layout == 'NCHW':
+        return data(n, c, y, x).astype('float')
+    if nchw_pack_layout(layout):
+        return data(n, c, y, x, ib, ic).astype('float')
+    # else must be NCHWxc
 
 Review comment:
   Hi @tmoreau89, thanks for the follow-up. Sure, the issue is fixed now.




[GitHub] [incubator-tvm] tmoreau89 commented on a change in pull request #4791: [TOPI] upsample operator 'NCHWinic' format support.

2020-01-31 Thread GitBox
tmoreau89 commented on a change in pull request #4791: [TOPI] upsample operator 
'NCHWinic' format support.
URL: https://github.com/apache/incubator-tvm/pull/4791#discussion_r373658550
 
 

 ##
 File path: topi/python/topi/image/resize.py
 ##
 @@ -18,8 +18,37 @@
 """TVM operator input resize compute."""
 from __future__ import absolute_import
 import tvm
+from topi.util import nchw_pack_layout, nchw_xc_layout
 from .. import tag
 
+def get_2d_indices(indices, layout='NCHW'):
+    """ Get 2d indices """
+    (cc, inum, ic) = (0, 0, 0)
+    if layout == 'NHWC':
+        n, y, x, c = indices
+        cc = None
+    elif layout == 'NCHW':
+        n, c, y, x = indices
+        cc = None
+    elif nchw_pack_layout(layout):
+        n, c, y, x, inum, ic = indices
+    else:
+        n, c, y, x, cc = indices
+    return n, c, y, x, cc, inum, ic
+
+def get_2d_pixel(data, layout, boxes, image_height, image_width, n, c, y, x, cc, ib, ic):
+    """ Get 2d pixel """
+    if boxes is None:
+        y = tvm.max(tvm.min(y, image_height - 1), 0)
+        x = tvm.max(tvm.min(x, image_width - 1), 0)
+    if layout == 'NHWC':
+        return data(n, y, x, c).astype('float')
+    if layout == 'NCHW':
+        return data(n, c, y, x).astype('float')
+    if nchw_pack_layout(layout):
+        return data(n, c, y, x, ib, ic).astype('float')
+    # else must be NCHWxc
 
 Review comment:
   can we assert this
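
   A sketch of what that assertion could look like (the regex below is an illustrative stand-in for `topi.util.nchw_xc_layout`, whose implementation is not shown in this hunk):

   ```python
   # Make the fall-through branch explicit: assert the layout really is
   # NCHWxc instead of relying on a comment.
   import re

   def nchw_xc_layout(layout):
       """Illustrative check for layouts like 'NCHW8c' (not TVM's code)."""
       return re.fullmatch(r"NCHW\d+c", layout) is not None

   def pick_branch(layout):
       if layout in ("NHWC", "NCHW"):
           return layout
       # else must be NCHWxc -- fail loudly on anything unexpected
       assert nchw_xc_layout(layout), "unsupported layout: " + layout
       return "NCHWxc"

   assert pick_branch("NCHW16c") == "NCHWxc"
   ```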




[GitHub] [incubator-tvm] tmoreau89 commented on issue #4791: [TOPI] upsample operator 'NCHWinic' format support.

2020-01-31 Thread GitBox
tmoreau89 commented on issue #4791: [TOPI] upsample operator 'NCHWinic' format 
support.
URL: https://github.com/apache/incubator-tvm/pull/4791#issuecomment-580886430
 
 
   @srkreddy1238 could you help review?




[GitHub] [incubator-tvm] huajsj commented on issue #4791: [TOPI] upsample operator 'NCHWinic' format support.

2020-01-31 Thread GitBox
huajsj commented on issue #4791: [TOPI] upsample operator 'NCHWinic' format 
support.
URL: https://github.com/apache/incubator-tvm/pull/4791#issuecomment-580885374
 
 
   Hi @jwfromm, if you have time, could you help review this? Thanks.




[GitHub] [incubator-tvm] anijain2305 opened a new pull request #4795: [Relay][Topi] Use SimplifyInference for L2 Normazlization.

2020-01-31 Thread GitBox
anijain2305 opened a new pull request #4795: [Relay][Topi] Use 
SimplifyInference for L2 Normazlization.
URL: https://github.com/apache/incubator-tvm/pull/4795
 
 
   Reason - Observed that l2_normalize (which is essentially a sequence of 
element-wise ops with one reduce op) was taking close to 15% of the time in a 
ssd_vgg network.
   
   This PR converts L2 normalize to a series of Relay expr, for which we have 
well-defined Relay passes and topi schedules. Therefore, the PR also removes 
topi compute/schedules.
   
   @yidawang @yzhliu @tqchen @kazum @PariksheetPinjari909 
   
   No extra tests are needed. The change is verified through existing tests.
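
   The decomposition can be sketched in numpy: l2_normalize is elementwise ops plus one reduction, which is what lets it be expressed as a series of existing Relay exprs (eps and axis semantics assumed from topi's `l2_normalize` definition, `x / sqrt(max(sum(x*x, axis), eps))`):

   ```python
   # Numpy sketch of l2_normalize as elementwise ops + one reduce op.
   import numpy as np

   def l2_normalize(x, eps=1e-10, axis=(1,)):
       sqr_sum = np.sum(x * x, axis=axis, keepdims=True)  # the one reduce op
       return x / np.sqrt(np.maximum(sqr_sum, eps))       # elementwise ops

   x = np.random.RandomState(0).randn(2, 3, 4, 4).astype("float32")
   out = l2_normalize(x)
   # each fiber along axis 1 now has unit L2 norm
   norms = np.sqrt(np.sum(out * out, axis=1))
   assert np.allclose(norms, 1.0, atol=1e-5)
   ```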




[GitHub] [incubator-tvm] huajsj commented on issue #4791: [TOPI] upsample operator 'NCHWinic' format support.

2020-01-31 Thread GitBox
huajsj commented on issue #4791: [TOPI] upsample operator 'NCHWinic' format 
support.
URL: https://github.com/apache/incubator-tvm/pull/4791#issuecomment-580881788
 
 
   Hi @tmoreau89, this patch is related to letting VTA support Yolov3. If you have 
time, could you help review it too?




[GitHub] [incubator-tvm] icemelon9 commented on issue #4787: [Relay] Conv2D padding representation

2020-01-31 Thread GitBox
icemelon9 commented on issue #4787: [Relay] Conv2D padding representation
URL: https://github.com/apache/incubator-tvm/pull/4787#issuecomment-580874675
 
 
   You should also add `get_pad_tuple2d` to these contrib conv2d ops, e.g., 
contrib_conv2d_nchwc




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4787: [Relay] Conv2D padding representation

2020-01-31 Thread GitBox
icemelon9 commented on a change in pull request #4787: [Relay] Conv2D padding 
representation
URL: https://github.com/apache/incubator-tvm/pull/4787#discussion_r373643774
 
 

 ##
 File path: python/tvm/relay/op/nn/util.py
 ##
 @@ -0,0 +1,56 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-variable
+"""NN operator common utilities"""
+from __future__ import absolute_import
+from  import container
+
+def get_pad_tuple2d(padding):
+    """Common code to get the pad option
+
+    Parameters
+    ----------
+    padding : Union[int, Tuple[int, ...]]
+        Padding size
+
+    Returns
+    -------
+    pad_top : int
+        Padding size on top
+    pad_left : int
+        Padding size on left
+    pad_down : int
+        Padding size on down.
+    pad_right : int
+        Padding size on right.
+    """
+    # compute the padding size
+    if isinstance(padding, container.Array):
+        padding = list(padding)
+    if isinstance(padding, (tuple, list)):
+        if len(padding) == 2:
+            pad_h = padding[0] * 2
+            pad_w = padding[1] * 2
+        elif len(padding) == 4:
+            return  padding[0], padding[1], padding[2], padding[3]
 
 Review comment:
   ```suggestion
   return padding[0], padding[1], padding[2], padding[3]
   ```
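For reference, the normalization this helper performs can be sketched as a self-contained function. This is an illustrative sketch, not the TVM implementation: the name `pad_tuple2d` is hypothetical, and it assumes the common TOPI convention of splitting an odd total padding with the extra row/column on the top/left:

```python
def pad_tuple2d(padding):
    """Normalize int / 2-tuple / 4-tuple padding to (top, left, down, right)."""
    if isinstance(padding, int):
        padding = (padding, padding)
    padding = list(padding)
    if len(padding) == 4:
        # already fully specified: (top, left, down, right)
        return tuple(padding)
    if len(padding) == 2:
        # symmetric padding: total per axis is twice the given value
        pad_h, pad_w = padding[0] * 2, padding[1] * 2
        pad_top, pad_left = (pad_h + 1) // 2, (pad_w + 1) // 2
        return pad_top, pad_left, pad_h - pad_top, pad_w - pad_left
    raise ValueError("Size of padding must be 2 or 4")
```

With this convention, `pad_tuple2d(1)`, `pad_tuple2d((1, 1))`, and `pad_tuple2d((1, 1, 1, 1))` all normalize to the same 4-tuple.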




[GitHub] [incubator-tvm] alexwong commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-31 Thread GitBox
alexwong commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-580873337
 
 
   > Reducing all the input sizes might help with memory issues; I don't think there's any need to use big 224x224 test data.
   
   I think that would definitely help and is worth a try, but if single-operator models are running into memory issues, then larger networks definitely would as well, and all of those require larger input sizes. Still looking into some ways to clean up memory.




[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4775: conv3d_ndhwc schedule

2020-01-31 Thread GitBox
icemelon9 commented on a change in pull request #4775: conv3d_ndhwc schedule
URL: https://github.com/apache/incubator-tvm/pull/4775#discussion_r373643228
 
 

 ##
 File path: topi/python/topi/x86/conv3d.py
 ##
 @@ -0,0 +1,92 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-variable, too-many-locals
+# pylint: disable=unused-argument, redefined-builtin, no-else-return
+"""Conv3D operators"""
+import tvm
+from .. import generic, tag
+
+@generic.schedule_conv3d_ndhwc.register("cpu")
+def schedule_conv3d_ndhwc(outs):
+    """TOPI schedule callback for conv3d
+
+    Parameters
+    ----------
+    outs: Array of Tensor
+        The computation graph description of conv3d
+        in the format of an array of tensors.
+
+    Returns
+    -------
+    s: Schedule
+        The computation schedule for conv3d.
+    """
+    s = tvm.create_schedule([x.op for x in outs])
+    output_op = outs[0].op
+    scheduled_ops = []
+
+    def traverse(op):
+        """Traverse operators from computation graph"""
+        # inline all one-to-one-mapping operators except the last stage (output)
+        if tag.is_broadcast(op.tag):
+            if op not in s.outputs:
+                s[op].compute_inline()
+            else:  # inject custom schedule
+                if len(op.axis) == 5:
+                    # schedule bias + bn + activation
+                    n, d, h, w, c = op.axis
+                    fused = s[op].fuse(n, d, h, w)
+                    s[op].parallel(fused)
+                    s[op].vectorize(c)
+            for tensor in op.input_tensors:
+                if isinstance(tensor.op, tvm.tensor.ComputeOp) and tensor.op not in scheduled_ops:
+                    traverse(tensor.op)
+
+        if 'conv3d_ndhwc' in op.tag:
+            conv = op.output(0)
+            kernel = op.input_tensors[1]
+            # dilation stage
+            if isinstance(kernel.op, tvm.tensor.ComputeOp) and "dilate" in kernel.op.tag:
+                s[kernel].compute_inline()
+
+            # padding stage
+            data = op.input_tensors[0]
+            data_pad = None
+            if isinstance(data.op, tvm.tensor.ComputeOp) and "pad" in data.op.tag:
+                # fuse pad h and w
+                data_pad = data
+                data = data_pad.op.input_tensors[0]
+                _, _, h_pad, w_pad, _ = data_pad.op.axis
+                pad_fused = s[data_pad].fuse(h_pad, w_pad)
+                s[data_pad].parallel(pad_fused)
+
+            # compute conv
+            C = conv
+            n, d, h, w, c = s[C].op.axis
+            s[C].vectorize(c)
+            if op != output_op:  # fuse bias + bn + activation
+                _, _, _, _, c_out = output_op.axis
+                s[C].compute_at(s[output_op], c_out)
+            else:
+                # fuse batch, depth, height axes
+                fused = s[C].fuse(n, d, h)
+                s[C].parallel(fused)
+
+        scheduled_ops.append(op)
+
+    traverse(output_op)
 
 Review comment:
   You can use the `traverse_inline` function to avoid redundantly inlining broadcast ops.
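The pattern `traverse_inline` factors out can be illustrated with a minimal mock of the tensor-DAG walk: a post-order traversal with a visited set, broadcast stages marked for inlining exactly once, and a callback fired on the op of interest. The classes and names below are hypothetical stand-ins, not the TOPI API:

```python
class Op:
    """Minimal stand-in for a compute op in the tensor DAG."""
    def __init__(self, name, tag, inputs=()):
        self.name, self.tag, self.inputs = name, tag, list(inputs)
        self.inlined = False

def traverse_inline(root, target_tag, callback):
    """Walk the DAG once, inline broadcast stages, call back on target ops."""
    visited = set()
    def _walk(op):
        if id(op) in visited:
            return  # each op is processed at most once, even in a diamond DAG
        visited.add(id(op))
        for parent in op.inputs:
            _walk(parent)
        if op.tag == "broadcast":
            op.inlined = True  # stands in for s[op].compute_inline()
        elif op.tag == target_tag:
            callback(op)
    _walk(root)

# A conv3d followed by bias-add and ReLU (both broadcast/injective stages)
conv = Op("conv", "conv3d_ndhwc")
bias = Op("bias_add", "broadcast", [conv])
relu = Op("relu", "broadcast", [bias])
found = []
traverse_inline(relu, "conv3d_ndhwc", found.append)
```

Because the visited set short-circuits repeat visits, the custom schedule callback runs once per matching op, which is what the hand-rolled `traverse` above re-implements.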




[GitHub] [incubator-tvm] jwfromm commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser

2020-01-31 Thread GitBox
jwfromm commented on issue #4497: [WIP] [Relay] Add a PyTorch to Relay Parser
URL: https://github.com/apache/incubator-tvm/pull/4497#issuecomment-580865226
 
 
   Reducing all the input sizes might help with memory issues; I don't think there's any need to use big 224x224 test data.




[GitHub] [incubator-tvm] alexgl-github commented on a change in pull request #4790: Fast exponent

2020-01-31 Thread GitBox
alexgl-github commented on a change in pull request #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#discussion_r373624561
 
 

 ##
 File path: topi/include/topi/elemwise.h
 ##
 @@ -360,5 +359,71 @@ inline Tensor full_like(const Tensor& x,
   }, name, tag);
 }
 
+ /*
+ * \brief Fast exponential function implementation from Eigen
+ * 
https://github.com/eigenteam/eigen-git-mirror/blob/master/Eigen/src/Core/arch/Default/GenericPacketMathFunctions.h#L183
 
 Review comment:
   Please see updated fast_exp implementation.
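The Eigen-derived `fast_exp` referenced here uses the classic range-reduction scheme: write x = k·ln2 + r with |r| ≤ ln2/2, approximate e^r with a low-order polynomial on that small interval, then scale by 2^k. A pure-Python sketch of the scheme, using a plain Taylor polynomial rather than Eigen's fitted coefficients (and omitting the overflow/underflow clamping a production version needs):

```python
import math

def fast_exp(x):
    # Range reduction: x = k*ln(2) + r with |r| <= ln(2)/2
    k = math.floor(x / math.log(2) + 0.5)
    r = x - k * math.log(2)
    # Degree-5 polynomial approximation of e^r on the reduced interval
    p = 1 + r * (1 + r * (0.5 + r * (1/6 + r * (1/24 + r / 120))))
    return math.ldexp(p, int(k))  # p * 2**k via exponent manipulation
```

The polynomial only ever sees |r| ≤ 0.347, so five terms already give roughly single-precision accuracy; the `2**k` scaling is exact because it just adjusts the floating-point exponent.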




[GitHub] [incubator-tvm] comaniac commented on issue #4787: [Relay] Conv2D padding representation

2020-01-31 Thread GitBox
comaniac commented on issue #4787: [Relay] Conv2D padding representation
URL: https://github.com/apache/incubator-tvm/pull/4787#issuecomment-580836237
 
 
   LGTM.
   @icemelon9 @yzhliu could you help review and merge? Thanks.




[GitHub] [incubator-tvm] inadob closed pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC

2020-01-31 Thread GitBox
inadob closed pull request #4696: [Relay][Frontend][TFlite] Add support for quantized LOGISTIC
URL: https://github.com/apache/incubator-tvm/pull/4696
 
 
   




[GitHub] [incubator-tvm] tqchen commented on a change in pull request #4790: Fast exponent

2020-01-31 Thread GitBox
tqchen commented on a change in pull request #4790: Fast exponent
URL: https://github.com/apache/incubator-tvm/pull/4790#discussion_r373583565
 
 

 ##
 File path: topi/include/topi/elemwise.h
 ##
 @@ -360,5 +359,71 @@ inline Tensor full_like(const Tensor& x,
   }, name, tag);
 }
 
+ /*
+ * \brief Fast exponential function implementation from Eigen
+ * 
https://github.com/eigenteam/eigen-git-mirror/blob/master/Eigen/src/Core/arch/Default/GenericPacketMathFunctions.h#L183
 
 Review comment:
   The code in the main repo needs to be licensed under ASv2.
   
   If the code comes from a different license, we will need to put the code in the thirdparty directory and specify the license clearly. If we reference an existing algorithm and implement it from scratch, it is better to declare ASv2.
   
   




[GitHub] [incubator-tvm] vizero1 opened a new pull request #4794: Change color channel from BGR to RGB for darknet preprocessing

2020-01-31 Thread GitBox
vizero1 opened a new pull request #4794: Change color channel from BGR to RGB for darknet preprocessing
URL: https://github.com/apache/incubator-tvm/pull/4794
 
 
   In the preprocessing for images in darknet, we need to change the color channel order from BGR to RGB.
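For illustration, reversing the channel order is just a reversal of the innermost axis. A minimal pure-Python sketch (not the darknet frontend code), treating the image as H x W x C nested lists:

```python
def bgr_to_rgb(image):
    # Reverse the innermost (channel) axis of an H x W x C image
    return [[pixel[::-1] for pixel in row] for row in image]

# One 2-pixel row: each channel triple swaps its first and last entries
image_bgr = [[[10, 20, 30], [40, 50, 60]]]
image_rgb = bgr_to_rgb(image_bgr)
```

With a NumPy array the same operation is the slice `img[:, :, ::-1]`, and applying it twice round-trips back to the original order.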

