[GitHub] [tvm] insop edited a comment on issue #7176: OpNotImplemented: The following operators are not supported for frontend ONNX: Softplus

2020-12-31 Thread GitBox


insop edited a comment on issue #7176:
URL: https://github.com/apache/tvm/issues/7176#issuecomment-753245149


   `Softplus` was added on 12/10/2020 in https://github.com/apache/tvm/pull/7089:
   - https://github.com/apache/tvm/blob/c02c9c528f91f9be3967b7d9ef9f1847f533590b/python/tvm/relay/frontend/onnx.py#L2102
   
   @FelixFu520, you might want to pull the latest `main` and retry.
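   
   For reference, a minimal way to confirm the frontend now accepts `Softplus` is to import a one-node ONNX graph through the Relay ONNX frontend; this is only a sketch, assuming `onnx` is installed and TVM is built from a recent `main`. On older builds this is the call that raises `OpNotImplemented`:
   
```python
# Sketch: build a single-node ONNX model that uses Softplus and import it with Relay.
from onnx import TensorProto, helper
from tvm import relay

node = helper.make_node("Softplus", inputs=["x"], outputs=["y"])
graph = helper.make_graph(
    [node],
    "softplus_check",
    inputs=[helper.make_tensor_value_info("x", TensorProto.FLOAT, [1, 4])],
    outputs=[helper.make_tensor_value_info("y", TensorProto.FLOAT, [1, 4])],
)
model = helper.make_model(graph, producer_name="softplus_check")

# Raises OpNotImplemented on builds that predate #7089; on a current checkout
# of main it should return a Relay module.
mod, params = relay.frontend.from_onnx(model, shape={"x": (1, 4)})
print(mod)
```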
   
   
   
   @junrushao1994 @jwfromm 
   
   However, I see that `SoftPlus` (note the **P** is in upper case) was already in.
   According to the [ONNX spec](https://github.com/onnx/onnx/blob/master/docs/Operators.md), it is `Softplus`, not `SoftPlus`.
   I am not sure we need to keep both (`Softplus` and `SoftPlus`).
   
   I have a branch that removes `SoftPlus`; let me know if I should create a PR.
   
https://github.com/insop/incubator-tvm/commit/1e944644680188f31ada93a7c4ec7de797a1a0e1.patch
   
   ```
From 1e944644680188f31ada93a7c4ec7de797a1a0e1 Mon Sep 17 00:00:00 2001
From: Insop Song 
Date: Thu, 31 Dec 2020 18:53:33 -0800
Subject: [PATCH] Remove seemingly invalid SoftPlus

- `Softplus` was added on 12/10/2020 in https://github.com/apache/tvm/pull/7089
- However, `SoftPlus` (note the P is in upper case) was already in.
  According to the [ONNX spec](https://github.com/onnx/onnx/blob/master/docs/Operators.md), it is `Softplus`, not `SoftPlus`.
---
 python/tvm/relay/frontend/onnx.py          | 9 ---------
 tests/python/frontend/onnx/test_forward.py | 1 -
 2 files changed, 10 deletions(-)

diff --git a/python/tvm/relay/frontend/onnx.py b/python/tvm/relay/frontend/onnx.py
index 6122c81d321..1c544d30971 100644
--- a/python/tvm/relay/frontend/onnx.py
+++ b/python/tvm/relay/frontend/onnx.py
@@ -932,14 +932,6 @@ def _impl_v1(cls, inputs, attr, params):
         return _op.tanh(_expr.const(beta) * inputs[0]) * _expr.const(alpha)
 
 
-class SoftPlus(OnnxOpConverter):
-    """Operator converter for SoftPlus."""
-
-    @classmethod
-    def _impl_v1(cls, inputs, attr, params):
-        return _op.log(_op.exp(inputs[0]) + _expr.const(1.0))
-
-
 class Softsign(OnnxOpConverter):
     """Operator converter for Softsign."""
 
@@ -2661,7 +2653,6 @@ def _get_convert_map(opset):
         "OneHot": OneHot.get_converter(opset),
         # 'Hardmax'
         "Softsign": Softsign.get_converter(opset),
-        "SoftPlus": SoftPlus.get_converter(opset),
         "Gemm": Gemm.get_converter(opset),
         "MatMul": MatMul.get_converter(opset),
         "Mod": Mod.get_converter(opset),
diff --git a/tests/python/frontend/onnx/test_forward.py b/tests/python/frontend/onnx/test_forward.py
index 33dd048896b..3d95a9a83ee 100644
--- a/tests/python/frontend/onnx/test_forward.py
+++ b/tests/python/frontend/onnx/test_forward.py
@@ -1983,7 +1983,6 @@ def verify_single_ops(op, x, out_np, rtol=1e-5, atol=1e-5):
     verify_single_ops("Tanh", x, np.tanh(x))
     verify_single_ops("Sigmoid", x, 1 / (1 + np.exp(-x)))
     verify_single_ops("Softsign", x, x / (1 + np.abs(x)))
-    verify_single_ops("SoftPlus", x, np.log(1 + np.exp(x)))
 
 
 @tvm.testing.uses_gpu
   ```
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] insop commented on issue #7176: OpNotImplemented: The following operators are not supported for frontend ONNX: Softplus

2020-12-31 Thread GitBox


insop commented on issue #7176:
URL: https://github.com/apache/tvm/issues/7176#issuecomment-753245149


   `Softplus` was added on 12/10/2020 in https://github.com/apache/tvm/pull/7089:
   - https://github.com/apache/tvm/blob/c02c9c528f91f9be3967b7d9ef9f1847f533590b/python/tvm/relay/frontend/onnx.py#L2102
   
   @junrushao1994 @jwfromm 
   
   However, I see that `SoftPlus` (note the **P** is in upper case) was already in.
   According to the [ONNX spec](https://github.com/onnx/onnx/blob/master/docs/Operators.md), it is `Softplus`, not `SoftPlus`.
   I am not sure we need to keep both (`Softplus` and `SoftPlus`).
   
   I have a branch that removes `SoftPlus`; let me know if I should create a PR.
   
https://github.com/insop/incubator-tvm/commit/1e944644680188f31ada93a7c4ec7de797a1a0e1.patch
   
   ```
From 1e944644680188f31ada93a7c4ec7de797a1a0e1 Mon Sep 17 00:00:00 2001
From: Insop Song 
Date: Thu, 31 Dec 2020 18:53:33 -0800
Subject: [PATCH] Remove seemingly invalid SoftPlus

- `Softplus` was added on 12/10/2020 in https://github.com/apache/tvm/pull/7089
- However, `SoftPlus` (note the P is in upper case) was already in.
  According to the [ONNX spec](https://github.com/onnx/onnx/blob/master/docs/Operators.md), it is `Softplus`, not `SoftPlus`.
---
 python/tvm/relay/frontend/onnx.py          | 9 ---------
 tests/python/frontend/onnx/test_forward.py | 1 -
 2 files changed, 10 deletions(-)

diff --git a/python/tvm/relay/frontend/onnx.py b/python/tvm/relay/frontend/onnx.py
index 6122c81d321..1c544d30971 100644
--- a/python/tvm/relay/frontend/onnx.py
+++ b/python/tvm/relay/frontend/onnx.py
@@ -932,14 +932,6 @@ def _impl_v1(cls, inputs, attr, params):
         return _op.tanh(_expr.const(beta) * inputs[0]) * _expr.const(alpha)
 
 
-class SoftPlus(OnnxOpConverter):
-    """Operator converter for SoftPlus."""
-
-    @classmethod
-    def _impl_v1(cls, inputs, attr, params):
-        return _op.log(_op.exp(inputs[0]) + _expr.const(1.0))
-
-
 class Softsign(OnnxOpConverter):
     """Operator converter for Softsign."""
 
@@ -2661,7 +2653,6 @@ def _get_convert_map(opset):
         "OneHot": OneHot.get_converter(opset),
         # 'Hardmax'
         "Softsign": Softsign.get_converter(opset),
-        "SoftPlus": SoftPlus.get_converter(opset),
         "Gemm": Gemm.get_converter(opset),
         "MatMul": MatMul.get_converter(opset),
         "Mod": Mod.get_converter(opset),
diff --git a/tests/python/frontend/onnx/test_forward.py b/tests/python/frontend/onnx/test_forward.py
index 33dd048896b..3d95a9a83ee 100644
--- a/tests/python/frontend/onnx/test_forward.py
+++ b/tests/python/frontend/onnx/test_forward.py
@@ -1983,7 +1983,6 @@ def verify_single_ops(op, x, out_np, rtol=1e-5, atol=1e-5):
     verify_single_ops("Tanh", x, np.tanh(x))
     verify_single_ops("Sigmoid", x, 1 / (1 + np.exp(-x)))
     verify_single_ops("Softsign", x, x / (1 + np.abs(x)))
-    verify_single_ops("SoftPlus", x, np.log(1 + np.exp(x)))
 
 
 @tvm.testing.uses_gpu
   ```
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi closed issue #7129: [Relay][Documentation] Behavior of relay.stack does not appear to match documentation

2020-12-31 Thread GitBox


masahi closed issue #7129:
URL: https://github.com/apache/tvm/issues/7129


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] masahi closed issue #6633: [FRONTEND][ONNX] Support NonMaxSuppression

2020-12-31 Thread GitBox


masahi closed issue #6633:
URL: https://github.com/apache/tvm/issues/6633


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] Laurawly commented on pull request #7187: [Fix] Tensor core type issue for dense

2020-12-31 Thread GitBox


Laurawly commented on pull request #7187:
URL: https://github.com/apache/tvm/pull/7187#issuecomment-753026936


   > Do we allow `data.dtype == "float32"`?
   > The problem comes from this downcast, which may cause accuracy problems.
   > https://github.com/apache/tvm/blob/c02c9c528f91f9be3967b7d9ef9f1847f533590b/python/tvm/topi/cuda/dense_tensorcore.py#L72-L73
   
   Good catch, just removed it.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on pull request #7045: [Arith] Simplify cast

2020-12-31 Thread GitBox


tqchen commented on pull request #7045:
URL: https://github.com/apache/tvm/pull/7045#issuecomment-752985049


   Thanks @hzfan for continuing to polish the code :) Arith analysis is core to most of our transformations, so it is important to keep the code clean and readable.
   
   I have carefully read the PR and added a few more comments to improve readability.
   
   @junrushao1994 @yzhliu It would be great to have you take another look as well.
   
   
   
   
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] tqchen commented on a change in pull request #7045: [Arith] Simplify cast

2020-12-31 Thread GitBox


tqchen commented on a change in pull request #7045:
URL: https://github.com/apache/tvm/pull/7045#discussion_r550500956



##
File path: src/arith/canonical_simplify.cc
##
@@ -77,6 +77,25 @@ inline PrimExpr DivImpl(PrimExpr a, PrimExpr b, DivMode mode) {
   }
 }
 
+bool CheckCastImpl(DataType dtype, PrimExpr value, Analyzer* analyzer) {

Review comment:
   Perhaps rename to UpcastIsSafe?

##
File path: src/arith/canonical_simplify.cc
##
@@ -128,6 +147,50 @@ class SplitExprNode : public CanonicalExprNode {
 
   void MulToSelf(int64_t scale) { this->scale *= scale; }
 
+  /*!
+   * \brief check if cast(dtype, self) is safe
+   * \param dtype The target datatype
+   * \param analyzer The analyzer
+   * \return whether the cast is safe or not
+   */
+  bool CheckCast(DataType dtype, Analyzer* analyzer) const {

Review comment:
   CanPushCastToChildren

##
File path: src/arith/canonical_simplify.cc
##
@@ -77,6 +77,25 @@ inline PrimExpr DivImpl(PrimExpr a, PrimExpr b, DivMode mode) {
   }
 }
 
+bool CheckCastImpl(DataType dtype, PrimExpr value, Analyzer* analyzer) {
+  if (!IsIndexType(dtype)) {
+    return false;
+  }
+  ConstIntBound bound = analyzer->const_int_bound(value);
+  int64_t ubound = Downcast<IntImm>(max_value(dtype))->value;
+  int64_t lbound = Downcast<IntImm>(min_value(dtype))->value;
+  if (value.dtype().bits() <= dtype.bits() ||  // upcast is safe
+      (bound->max_value <= ubound && bound->min_value >= lbound)) {
+    return true;
+  }
+  return false;
+}
+
+#define TVM_CHECK_CANONICAL_SIMPLIFY_CAST(DTYPE, VALUE) \
+  if (!CheckCastImpl(DTYPE, VALUE, analyzer)) {         \

Review comment:
   Inline the function call; while the macro makes the code a bit more concise, 
it makes the code itself harder to understand.
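
   For intuition, here is a tiny numeric illustration (plain Python, not TVM code) of what the bound check above guards against: a truncating cast does not commute with later arithmetic once an intermediate value falls outside the target range.

```python
import numpy as np

def cast_i8(v):
    """Wrap-around cast to int8, standing in for cast(int8, <intermediate>)."""
    return int(np.int64(v).astype(np.int8))

x = 100

# Keep the arithmetic wide and cast only the final value: (x * 2) // 2 == 100.
final_only = cast_i8(x * 2 // 2)                      # 100

# Push the cast onto every intermediate: x * 2 == 200 no longer fits in int8,
# so the truncated value poisons the rest of the computation.
pushed_down = cast_i8(cast_i8(cast_i8(x) * 2) // 2)   # 200 wraps to -56 -> -28

print(final_only, pushed_down)                        # 100 -28
```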

##
File path: src/arith/canonical_simplify.cc
##
@@ -77,6 +77,25 @@ inline PrimExpr DivImpl(PrimExpr a, PrimExpr b, DivMode mode) {
   }
 }
 
+bool CheckCastImpl(DataType dtype, PrimExpr value, Analyzer* analyzer) {

Review comment:
   document this function

##
File path: src/arith/canonical_simplify.cc
##
@@ -255,6 +318,61 @@ class SumExprNode : public CanonicalExprNode {
 
   void AddToSelf(const SumExpr& other, int64_t scale);
 
+  /*!
+   * \brief check if cast(dtype, self) is safe
+   * \param dtype The target datatype
+   * \param analyzer The analyzer
+   * \return whether the cast is safe or not
+   */
+  bool CheckCast(DataType dtype, Analyzer* analyzer) const {
+    // cast(dtype, arg_1 + arg_2 + ... arg_n) ==
+    // cast(dtype, arg_1) + ... + cast(dtype, arg_n)
+    // iff it is an upcast (dtype.bits >= self.dtype.bits) or all of
+    // its intermediate results fit in the range of dtype
+    if (dtype.bits() >= this->dtype.bits()) {
+      return true;  // upcast is safe
+    }
+    PrimExpr res = make_const(dtype, 0);
+    for (size_t i = 0; i < args.size(); ++i) {
+      if (args[i]->scale > 0) {
+        res = res + args[i]->Normalize();
+        TVM_CHECK_CANONICAL_SIMPLIFY_CAST(dtype, res)
+      }
+    }
+    if (base > 0) {
+      res = res + make_const(dtype, base);
+      TVM_CHECK_CANONICAL_SIMPLIFY_CAST(dtype, res)
+    }
+    // negative scales follows using sub.
+    for (size_t i = 0; i < args.size(); ++i) {
+      if (args[i]->scale < 0) {
+        res = res - args[i]->NormalizeWithScale(-1);
+        TVM_CHECK_CANONICAL_SIMPLIFY_CAST(dtype, res)
+      }
+    }
+    if (base < 0) {
+      res = res - make_const(dtype, -base);
+      TVM_CHECK_CANONICAL_SIMPLIFY_CAST(dtype, res)
+    }
+    for (const auto& arg : args) {
+      if (!arg->CheckCast(dtype, analyzer)) {
+        return false;
+      }
+    }
+    return true;
+  }
+
+  /*!
+   * \brief self = cast(dtype, self)
+   * \param dtype The target datatype
+   */
+  void CastTo(DataType dtype) {

Review comment:
   PushCastToChildren

##
File path: src/arith/canonical_simplify.cc
##
@@ -128,6 +147,50 @@ class SplitExprNode : public CanonicalExprNode {
 
   void MulToSelf(int64_t scale) { this->scale *= scale; }
 
+  /*!
+   * \brief check if cast(dtype, self) is safe
+   * \param dtype The target datatype
+   * \param analyzer The analyzer
+   * \return whether the cast is safe or not
+   */
+  bool CheckCast(DataType dtype, Analyzer* analyzer) const {
+    // cast(dtype, index % upper_factor / lower_factor * scale) ==
+    // cast(dtype, index) % upper_factor / lower_factor * scale
+    // iff it is an upcast (dtype.bits >= self.dtype.bits) or all of
+    // its intermediate results fit in the range of dtype
+    if (dtype.bits() >= this->dtype.bits()) {
+      return true;  // upcast is safe
+    }
+    PrimExpr res = this->index;
+    if (this->scale == 0) {
+      return true;
+    }
+    TVM_CHECK_CANONICAL_SIMPLIFY_CAST(dtype, res)
+    if (this->upper_factor != SplitExprNode::kPosInf) {
+      res = ModImpl(res, make_const(this->dtype, this->upper_factor), div_mode);
+  

[GitHub] [tvm] tqchen commented on a change in pull request #7164: [µTVM] Add documentation

2020-12-31 Thread GitBox


tqchen commented on a change in pull request #7164:
URL: https://github.com/apache/tvm/pull/7164#discussion_r550499883



##
File path: docs/dev/index.rst
##
@@ -396,3 +396,11 @@ Security
    :maxdepth: 1
 
    security
+
+
+microTVM
+--------
+.. toctree::

Review comment:
   The main reason is that many figures are binaries; updating a figure corresponds to a diff of binaries. Adding binaries to the repo would inevitably make the git history large.
   
   
   





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] merrymercy commented on pull request #7187: [Fix] Tensor core type issue for dense

2020-12-31 Thread GitBox


merrymercy commented on pull request #7187:
URL: https://github.com/apache/tvm/pull/7187#issuecomment-752964279


   Do we allow `data.dtype == "float32"`?
   The problem comes from this downcast, which may cause accuracy problems.
   
https://github.com/apache/tvm/blob/c02c9c528f91f9be3967b7d9ef9f1847f533590b/python/tvm/topi/cuda/dense_tensorcore.py#L72-L73



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] merrymercy removed a comment on pull request #7187: [Fix] Tensor core type issue for dense

2020-12-31 Thread GitBox


merrymercy removed a comment on pull request #7187:
URL: https://github.com/apache/tvm/pull/7187#issuecomment-752963716


   Do we allow `data.dtype == "float32"`?
   The problem comes from this downcast, which may cause accuracy problems.
   
https://github.com/apache/tvm/blob/c02c9c528f91f9be3967b7d9ef9f1847f533590b/python/tvm/topi/cuda/dense_tensorcore.py#L72-L73



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] merrymercy edited a comment on pull request #7187: [Fix] Tensor core type issue for dense

2020-12-31 Thread GitBox


merrymercy edited a comment on pull request #7187:
URL: https://github.com/apache/tvm/pull/7187#issuecomment-752963716


   Do we allow `data.dtype == "float32"`?
   The problem comes from this downcast, which may cause accuracy problems.
   
https://github.com/apache/tvm/blob/c02c9c528f91f9be3967b7d9ef9f1847f533590b/python/tvm/topi/cuda/dense_tensorcore.py#L72-L73



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] merrymercy edited a comment on pull request #7187: [Fix] Tensor core type issue for dense

2020-12-31 Thread GitBox


merrymercy edited a comment on pull request #7187:
URL: https://github.com/apache/tvm/pull/7187#issuecomment-752963716


   Should we check `data.dtype == "float16" and weight.dtype == "float16"`?
   The problem comes from this downcast, which may cause accuracy problems.
   
https://github.com/apache/tvm/blob/c02c9c528f91f9be3967b7d9ef9f1847f533590b/python/tvm/topi/cuda/dense_tensorcore.py#L72-L73



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] merrymercy commented on pull request #7187: [Fix] Tensor core type issue for dense

2020-12-31 Thread GitBox


merrymercy commented on pull request #7187:
URL: https://github.com/apache/tvm/pull/7187#issuecomment-752963716


   Should we check `data.dtype == "float16" and weight.dtype == "float16"`?
   The problem comes from this downcast:
   
https://github.com/apache/tvm/blob/c02c9c528f91f9be3967b7d9ef9f1847f533590b/python/tvm/topi/cuda/dense_tensorcore.py#L72-L73
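
   A minimal sketch of the kind of guard being suggested (hypothetical helper, not the actual change in #7187; `data` and `weight` stand for the tensors read at the linked lines):

```python
def _check_tensorcore_dtypes(data, weight):
    """Refuse to downcast silently: require float16 inputs for dense_tensorcore."""
    if not (data.dtype == "float16" and weight.dtype == "float16"):
        raise TypeError(
            "dense_tensorcore expects float16 data and weight, got "
            f"{data.dtype} and {weight.dtype}"
        )
```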



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org