[GitHub] [incubator-tvm] FrozenGene edited a comment on issue #5355: Factor out import of common tflite.Operator in tflite frontend.

2020-04-21 Thread GitBox


FrozenGene edited a comment on issue #5355:
URL: https://github.com/apache/incubator-tvm/pull/5355#issuecomment-617571690


   Thanks @u99127 @siju-samuel!



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[incubator-tvm] branch master updated: Factor out import of common tflite.Operator in tflite frontend. (#5355)

2020-04-21 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 3e3ccce  Factor out import of common tflite.Operator in tflite frontend. (#5355)
3e3ccce is described below

commit 3e3ccce1135c25dd1d99dc7c2b8ff589c93ee7ea
Author: Ramana Radhakrishnan 
AuthorDate: Wed Apr 22 07:09:11 2020 +0100

Factor out import of common tflite.Operator in tflite frontend. (#5355)

* Restructure imports in tflite frontend.

These python modules are needed for every tflite file parsed.
Factor out imports of the most common ones.

Now that the import of Operator is common, the asserts can be shared as well.

Removes 473 lines of duplication.

* Only restrict to tflite.Operator
---
 python/tvm/relay/frontend/tflite.py | 156 ++--
 1 file changed, 5 insertions(+), 151 deletions(-)

diff --git a/python/tvm/relay/frontend/tflite.py b/python/tvm/relay/frontend/tflite.py
index d489bd3..a2e0904 100644
--- a/python/tvm/relay/frontend/tflite.py
+++ b/python/tvm/relay/frontend/tflite.py
@@ -159,7 +159,12 @@ class OperatorConverter(object):
             op = self.subgraph.Operators(op_idx)
             op_code_str = self.get_op_code_str(op)
             output_tensors = self.get_output_tensors(op)
+            try:
+                from tflite.Operator import Operator
+            except ImportError:
+                raise ImportError("The tflite package must be installed")
 
+            assert isinstance(op, Operator)
             ret = self.convert_map[op_code_str](op)
 
             if len(output_tensors) == 1:
@@ -288,12 +293,6 @@ class OperatorConverter(object):
 
     def is_quantized(self, op):
         """Check if an input tensor is quantized."""
-        try:
-            from tflite.Operator import Operator
-        except ImportError:
-            raise ImportError("The tflite package must be installed")
-
-        assert isinstance(op, Operator)
         input_tensors = self.get_input_tensors(op)
         first_tensor = input_tensors[0]
         return first_tensor.qnn_params is not None
@@ -335,12 +334,10 @@ class OperatorConverter(object):
         """Convert TFLite reshape"""
         try:
             from tflite.BuiltinOptions import BuiltinOptions
-            from tflite.Operator import Operator
             from tflite.ReshapeOptions import ReshapeOptions
         except ImportError:
             raise ImportError("The tflite package must be installed")
 
-        assert isinstance(op, Operator)
         input_tensors = self.get_input_tensors(op)
         assert input_tensors, "input tensors should not be empty"
         input_tensor = input_tensors[0]
@@ -368,7 +365,6 @@ class OperatorConverter(object):
         """Generic method to Convert TFLite RESIZE operators"""
         try:
             from tflite.BuiltinOptions import BuiltinOptions
-            from tflite.Operator import Operator
             from tflite.ResizeBilinearOptions import ResizeBilinearOptions
             # ResizeNearestNeighborOptions was added in tflite v1.13
             tflite_ver = 1120
@@ -378,7 +374,6 @@ class OperatorConverter(object):
         except ImportError:
             raise ImportError("The tflite package must be installed")
 
-        assert isinstance(op, Operator)
         input_tensors = self.get_input_tensors(op)
         assert len(input_tensors) == 2, "input tensors length should be 2"
 
@@ -421,14 +416,12 @@ class OperatorConverter(object):
     def convert_l2_normalization(self, op):
         """Convert TFLite L2_NORMALIZATION """
         try:
-            from tflite.Operator import Operator
             from tflite.BuiltinOptions import BuiltinOptions
             from tflite.L2NormOptions import L2NormOptions
             from tflite.ActivationFunctionType import ActivationFunctionType
         except ImportError:
             raise ImportError("The tflite package must be installed")
 
-        assert isinstance(op, Operator)
         input_tensors = self.get_input_tensors(op)
         assert len(input_tensors) == 1, "input tensors length should be 1"
         input_tensor = input_tensors[0]
@@ -467,13 +460,11 @@ class OperatorConverter(object):
     def convert_lrn(self, op):
         """Convert TFLite LOCAL_RESPONSE_NORMALIZATION """
         try:
-            from tflite.Operator import Operator
             from tflite.BuiltinOptions import BuiltinOptions
             from tflite.LocalResponseNormalizationOptions import LocalResponseNormalizationOptions
         except ImportError:
             raise ImportError("The tflite package must be installed")
 
-        assert isinstance(op, Operator)
         if self.is_quantized(op):
             raise tvm.error.OpNotImplemented(
                 'TFlite quantized LRN operator is not supported yet.')
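The pattern of the refactor above can be sketched outside of TVM: the shared validity check is hoisted into the single dispatch site, so each per-operator converter drops its repeated import/assert boilerplate. The names below (`OperatorConverter`, `convert_op`, and the dict-based operator record) loosely mirror the frontend but are a standalone illustration, not the real `tflite.py`.

```python
# Standalone sketch of the refactor (illustrative names, not the real
# tvm/relay/frontend/tflite.py): the validity check runs once at the
# dispatch site instead of being repeated in every convert_* method.
class OperatorConverter:
    def __init__(self):
        # Map op code strings to converter methods, as the frontend does.
        self.convert_map = {"RESHAPE": self.convert_reshape}

    def convert_op(self, op_code_str, op):
        # One common check replaces a per-method import + assert.
        assert isinstance(op, dict), "op must be a parsed operator record"
        return self.convert_map[op_code_str](op)

    def convert_reshape(self, op):
        # No import/isinstance boilerplate needed here any more.
        return ("reshape", op["inputs"])
```

With this shape, `OperatorConverter().convert_op("RESHAPE", {"inputs": [0]})` returns `("reshape", [0])`, and adding a new converter no longer duplicates the shared check.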

[GitHub] [incubator-tvm] gopinath-r edited a comment on issue #5133: [Torch] A list of missing op conversion in need of help

2020-04-21 Thread GitBox


gopinath-r edited a comment on issue #5133:
URL: https://github.com/apache/incubator-tvm/issues/5133#issuecomment-617568268


   @siju-samuel thanks for the reply
   
   These are the values of input and input_types in repeat:
   **input>>> <bound method Kernel.raw_input of ...>
   input_types>>> ['float', 'ListType']
   data>>> tensor([[[0., 1., 2., 3., 4., 5., 6., 7.]]])
   reps>>> [1, CallNode(Op(multiply), [Constant(152), Constant(8)], (nullptr), []), 44]**
   
   Use of torch.repeat can be found at line numbers 141, 144 of bts.py.
   
   I have uploaded the script and necessary files to run bts in gdrive
   
[BTS_TVM](https://drive.google.com/file/d/1ZHYA12rv04hkVZFRI1TMs3Mdt_fPmbGr/view?usp=sharing)
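For context on the values above: with purely constant reps, `torch.Tensor.repeat` is plain tiling, which the hypothetical helper below emulates for a 3-D nested-list tensor. The reported difficulty is that one entry of `reps` is a symbolic Relay `CallNode` rather than a Python int, so a converter cannot take this eager, constant-only path.

```python
# Hypothetical helper (not TVM code) emulating torch.Tensor.repeat for a
# 3-D tensor stored as nested lists. All three reps must be concrete ints;
# a symbolic entry (like the CallNode in the report above) cannot be
# evaluated eagerly here.
def repeat3d(data, reps):
    r0, r1, r2 = reps  # must all be concrete ints for this path
    tile_row = lambda row: [x for _ in range(r2) for x in row]
    tile_plane = lambda plane: [tile_row(row) for _ in range(r1) for row in plane]
    return [tile_plane(plane) for _ in range(r0) for plane in data]
```

For example, `repeat3d([[[0., 1.]]], (1, 2, 3))` tiles a (1, 1, 2) tensor into shape (1, 2, 6), matching `x.repeat(1, 2, 3)` in torch.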
   







[GitHub] [incubator-tvm] icemelon9 opened a new pull request #5403: [Fix] Remove the duplicate PrintIR pass in Relay

2020-04-21 Thread GitBox


icemelon9 opened a new pull request #5403:
URL: https://github.com/apache/incubator-tvm/pull/5403


   Fix #5337 
   
   @tqchen @zhiics 







[GitHub] [incubator-tvm] tqchen commented on issue #5337: [WIP][RELAY] Remove re-exports of tvm.transform

2020-04-21 Thread GitBox


tqchen commented on issue #5337:
URL: https://github.com/apache/incubator-tvm/pull/5337#issuecomment-617553469


   Good catch @icemelon9 can you send another PR?







[GitHub] [incubator-tvm] icemelon9 commented on issue #5337: [WIP][RELAY] Remove re-exports of tvm.transform

2020-04-21 Thread GitBox


icemelon9 commented on issue #5337:
URL: https://github.com/apache/incubator-tvm/pull/5337#issuecomment-617537724


   @tqchen You forgot to remove PrintIR in the Relay transform header.
   
https://github.com/apache/incubator-tvm/blob/00014e20c3cc077727d467a67d3498260627e4e0/include/tvm/relay/transform.h#L333







[GitHub] [incubator-tvm] FrozenGene commented on issue #5380: [KERAS]Minimum & AlphaDropout op support

2020-04-21 Thread GitBox


FrozenGene commented on issue #5380:
URL: https://github.com/apache/incubator-tvm/pull/5380#issuecomment-617534456


   Thanks @siju-samuel 







[incubator-tvm] branch master updated (ef61fd5 -> 24f6865)

2020-04-21 Thread zhaowu
This is an automated email from the ASF dual-hosted git repository.

zhaowu pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from ef61fd5  [LLVM] Use ArrayRef in calls to CreateShuffleVector (#5399)
 add 24f6865  [KERAS]Minimum & AlphaDropout op support (#5380)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/keras.py  | 9 +++--
 tests/python/frontend/keras/test_forward.py | 1 +
 2 files changed, 8 insertions(+), 2 deletions(-)



[GitHub] [incubator-tvm] FrozenGene commented on issue #5355: Factor out import of common tflite.Operator in tflite frontend.

2020-04-21 Thread GitBox


FrozenGene commented on issue #5355:
URL: https://github.com/apache/incubator-tvm/pull/5355#issuecomment-617533538


   @siju-samuel could you help to review it again?







[GitHub] [incubator-tvm] tqchen commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py

2020-04-21 Thread GitBox


tqchen commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-617532455


   I don't know unless we go and dig deeper, but if the bug is reproducible, then it should not be hard to find the cause.







[GitHub] [incubator-tvm] hudmgy commented on issue #856: Fail to load .dll under windows

2020-04-21 Thread GitBox


hudmgy commented on issue #856:
URL: https://github.com/apache/incubator-tvm/issues/856#issuecomment-617530826


   > btw, this generated .dll file works well when I call it using tvm's python API.
   
   Hello, when I generated a .dll file using `lib.export_library('path/to/deploy_lib.dll')`, I failed with an exception:
   
   cl: Command line warning D9024: unrecognized source file type “C:\Users\afx-\AppData\Local\Temp\tmprr0gougm\lib0.o”, object file assumed
   cl: Command line warning D9027: source file “C:\Users\afx-\AppData\Local\Temp\tmprr0gougm\lib0.o” ignored
   dllmain.cc
   C:\Users\afx-\AppData\Local\Temp\tmpfj_x3sy6\dllmain.cc(1): warning C4067: unexpected token after preprocessor instruction - newline expected
   C:\Users\afx-\AppData\Local\Temp\tmpfj_x3sy6\dllmain.cc(1): fatal error C1034: windows.h:
   
   Have you ever had the same problem?
   Thank you.
   
   







[GitHub] [incubator-tvm] hudmgy opened a new issue #5402: lib.export_library('path/to/deploy_lib.dll') on windows

2020-04-21 Thread GitBox


hudmgy opened a new issue #5402:
URL: https://github.com/apache/incubator-tvm/issues/5402


   When I ran `lib.export_library('path/to/deploy_lib.dll')` on Windows, it failed:
   
   cl: Command line warning D9024: unrecognized source file type “C:\Users\afx-\AppData\Local\Temp\tmprr0gougm\lib0.o”, object file assumed
   cl: Command line warning D9027: source file “C:\Users\afx-\AppData\Local\Temp\tmprr0gougm\lib0.o” ignored
   dllmain.cc
   C:\Users\afx-\AppData\Local\Temp\tmpfj_x3sy6\dllmain.cc(1): warning C4067: unexpected token after preprocessor instruction - newline expected
   C:\Users\afx-\AppData\Local\Temp\tmpfj_x3sy6\dllmain.cc(1): fatal error C1034: windows.h:







[GitHub] [incubator-tvm] tqchen opened a new pull request #5401: Update dmlc-core to latest

2020-04-21 Thread GitBox


tqchen opened a new pull request #5401:
URL: https://github.com/apache/incubator-tvm/pull/5401


   







[GitHub] [incubator-tvm] trivialfis commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py

2020-04-21 Thread GitBox


trivialfis commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-617509251


   Is it possible that somehow XGBoost linked a wrong dmlc static library?







[GitHub] [incubator-tvm] tqchen commented on issue #5357: [Relay] enable blocking format in x86 conv2d and fold scale axis

2020-04-21 Thread GitBox


tqchen commented on issue #5357:
URL: https://github.com/apache/incubator-tvm/pull/5357#issuecomment-617507224


   cc @icemelon9 @yzhliu @anijain2305 







[incubator-tvm] branch master updated: [LLVM] Use ArrayRef in calls to CreateShuffleVector (#5399)

2020-04-21 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new ef61fd5  [LLVM] Use ArrayRef in calls to CreateShuffleVector (#5399)
ef61fd5 is described below

commit ef61fd5049eaee6f780fcff5069910fb202ad84c
Author: Krzysztof Parzyszek 
AuthorDate: Tue Apr 21 21:11:28 2020 -0500

[LLVM] Use ArrayRef in calls to CreateShuffleVector (#5399)

This switch was made in LLVM 11. Previously this function was expecting
mask indices of type uint32_t. This variant is now deprecated.
---
 src/target/llvm/codegen_llvm.cc | 12 
 1 file changed, 12 insertions(+)

diff --git a/src/target/llvm/codegen_llvm.cc b/src/target/llvm/codegen_llvm.cc
index 820a20c..a8b0d0e 100644
--- a/src/target/llvm/codegen_llvm.cc
+++ b/src/target/llvm/codegen_llvm.cc
@@ -491,7 +491,11 @@ llvm::Value* CodeGenLLVM::CreateVecSlice(llvm::Value* vec, int begin, int extent
 
 llvm::Value* CodeGenLLVM::CreateVecFlip(llvm::Value* vec) {
   int num_elems = static_cast<int>(vec->getType()->getVectorNumElements());
+#if TVM_LLVM_VERSION >= 110
+  std::vector<int> indices;
+#else
   std::vector<unsigned> indices;
+#endif
   for (int i = 0; i < num_elems; ++i) {
     indices.push_back(num_elems - i - 1);
   }
@@ -531,7 +535,11 @@ llvm::Value* CodeGenLLVM::CreateVecConcat(std::vector<llvm::Value*> vecs) {
     rhs = CreateVecPad(rhs, lhs_lanes);
   }
   const size_t shared_lanes = std::max(lhs_lanes, rhs_lanes);
+#if TVM_LLVM_VERSION >= 110
+  std::vector<int> mask;
+#else
   std::vector<unsigned> mask;
+#endif
   for (size_t i = 0; i < lhs_lanes; ++i) {
     mask.push_back(i);
   }
@@ -872,7 +880,11 @@ llvm::Value* CodeGenLLVM::CreateIntrinsic(const CallNode* op) {
     llvm::Value *v0 = MakeValue(op->args[0]);
     llvm::Value *v1 = MakeValue(op->args[1]);
     int num_elems = static_cast<int>(v0->getType()->getVectorNumElements()) * 2;
+#if TVM_LLVM_VERSION >= 110
+    std::vector<int> indices;
+#else
     std::vector<unsigned> indices;
+#endif
     for (int i = 0; i < num_elems; ++i) {
       indices.push_back(i);
     }



[GitHub] [incubator-tvm] tqchen commented on issue #5398: [LLVM] Replace calls to Type::getVectorNumElements

2020-04-21 Thread GitBox


tqchen commented on issue #5398:
URL: https://github.com/apache/incubator-tvm/pull/5398#issuecomment-617505301


   @kparzysz-quic please rebase against the master







[GitHub] [incubator-tvm] tqchen edited a comment on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py

2020-04-21 Thread GitBox


tqchen edited a comment on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-617494614


   The trace might offer some insights, @hcho3. Could it be caused by ConfigureGpuId? Also cc @trivialfis, since it seems related to https://github.com/dmlc/xgboost/pull/4961.
   ```
   0x7fffb5ba9e37 in std::vector, 
std::allocator > > 
xgboost::XGBoostParameter::UpdateAllowUnknown, std::allocator > > 
>(std::vector, 
std::allocator > > const&, bool*) ()
  from 
/home/ubuntu/.local/share/virtualenvs/tvm-FxJJpK7X/lib/python3.7/site-packages/xgboost/./lib/libxgboost.so
   (gdb) bt
   #0  0x7fffb5ba9e37 in std::vector, 
std::allocator > > 
xgboost::XGBoostParameter::UpdateAllowUnknown, std::allocator > > 
>(std::vector, 
std::allocator > > const&, bool*) ()
  from 
/home/ubuntu/.local/share/virtualenvs/tvm-FxJJpK7X/lib/python3.7/site-packages/xgboost/./lib/libxgboost.so
   #1  0x7fffb5b970b7 in xgboost::GenericParameter::ConfigureGpuId(bool) ()
   ```









[GitHub] [incubator-tvm] hcho3 commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py

2020-04-21 Thread GitBox


hcho3 commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-617492866


   Let me try again with the `TVM_FFI=ctypes` environment variable set. What does this do?







[GitHub] [incubator-tvm] areusch edited a comment on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py

2020-04-21 Thread GitBox


areusch edited a comment on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-617491801


   I see this reliably with a virtualenv on bionic on AWS.
   
   environment:
   ami: 
`099720109477/ubuntu/images/hvm-ssd/ubuntu-bionic-18.04-amd64-server-20200408`
   using tvm revision: `72f2aea2dd219bf55c15b3cf4cfc21491f1f60dd`
   
   command: `TVM_FFI=ctypes python3 -m pytest -s -v tests/python/unittest -k 
'test_autotvm_xgboost_model'`
   
   python version:
   ```
   $ python --version
   Python 3.7.5
   ```
   
   installed python packages:
   ```
   antlr4-python3-runtime==4.8
   Cython==0.29.16
   decorator==4.4.2
   psutil==5.7.0
   pylint==2.4.4
 - astroid [required: >=2.3.0,<2.4, installed: 2.3.3]
   - lazy-object-proxy [required: ==1.4.*, installed: 1.4.3]
   - six [required: ~=1.12, installed: 1.14.0]
   - typed-ast [required: >=1.4.0,<1.5, installed: 1.4.1]
   - wrapt [required: ==1.11.*, installed: 1.11.2]
 - isort [required: >=4.2.5,<5, installed: 4.3.21]
 - mccabe [required: >=0.6,<0.7, installed: 0.6.1]
   pytest==5.4.1
 - attrs [required: >=17.4.0, installed: 19.3.0]
 - importlib-metadata [required: >=0.12, installed: 1.6.0]
   - zipp [required: >=0.5, installed: 3.1.0]
 - more-itertools [required: >=4.0.0, installed: 8.2.0]
 - packaging [required: Any, installed: 20.3]
   - pyparsing [required: >=2.0.2, installed: 2.4.7]
   - six [required: Any, installed: 1.14.0]
 - pluggy [required: >=0.12,<1.0, installed: 0.13.1]
   - importlib-metadata [required: >=0.12, installed: 1.6.0]
 - zipp [required: >=0.5, installed: 3.1.0]
 - py [required: >=1.5.0, installed: 1.8.1]
 - wcwidth [required: Any, installed: 0.1.9]
   tornado==6.0.4
   xgboost==1.0.2
 - numpy [required: Any, installed: 1.18.3]
 - scipy [required: Any, installed: 1.4.1]
   - numpy [required: >=1.13.3, installed: 1.18.3]
   ```
   
   backtrace:
   ```
   ~/ws/tvm$ gdb python
   GNU gdb (Ubuntu 8.1-0ubuntu3.2) 8.1.0.20180409-git
   Copyright (C) 2018 Free Software Foundation, Inc.
   License GPLv3+: GNU GPL version 3 or later 
   This is free software: you are free to change and redistribute it.
   There is NO WARRANTY, to the extent permitted by law.  Type "show copying"
   and "show warranty" for details.
   This GDB was configured as "x86_64-linux-gnu".
   Type "show configuration" for configuration details.
   For bug reporting instructions, please see:
   .
   Find the GDB manual and other documentation resources online at:
   .
   For help, type "help".
   Type "apropos word" to search for commands related to "word"...
   Reading symbols from python...(no debugging symbols found)...done.
   (gdb) run tests/python/unittest/test_autotvm_xgboost_model.py
   Starting program: 
/home/ubuntu/.local/share/virtualenvs/tvm-FxJJpK7X/bin/python 
tests/python/unittest/test_autotvm_xgboost_model.py
   [Thread debugging using libthread_db enabled]
   Using host libthread_db library "/lib/x86_64-linux-gnu/libthread_db.so.1".
   [New Thread 0x74104700 (LWP 11905)]
   [New Thread 0x73903700 (LWP 11906)]
   [New Thread 0x7fffef102700 (LWP 11907)]
   [New Thread 0x7fffec901700 (LWP 11908)]
   [New Thread 0x7fffea100700 (LWP 11909)]
   [New Thread 0x7fffe78ff700 (LWP 11910)]
   [New Thread 0x7fffe50fe700 (LWP 11911)]
   [New Thread 0x7fffe28fd700 (LWP 11912)]
   [New Thread 0x7fffe00fc700 (LWP 11913)]
   [New Thread 0x7fffdd8fb700 (LWP 11914)]
   [New Thread 0x7fffdb0fa700 (LWP 11915)]
   [New Thread 0x7fffd88f9700 (LWP 11916)]
   [New Thread 0x7fffd60f8700 (LWP 11917)]
   [New Thread 0x7fffd38f7700 (LWP 11918)]
   [New Thread 0x7fffd10f6700 (LWP 11919)]
   [Thread 0x7fffe50fe700 (LWP 11911) exited]
   [Thread 0x7fffd88f9700 (LWP 11916) exited]
   [Thread 0x7fffd10f6700 (LWP 11919) exited]
   [Thread 0x7fffd60f8700 (LWP 11917) exited]
   [Thread 0x7fffdb0fa700 (LWP 11915) exited]
   [Thread 0x7fffdd8fb700 (LWP 11914) exited]
   [Thread 0x7fffe00fc700 (LWP 11913) exited]
   [Thread 0x7fffe28fd700 (LWP 11912) exited]
   [Thread 0x7fffe78ff700 (LWP 11910) exited]
   [Thread 0x7fffea100700 (LWP 11909) exited]
   [Thread 0x7fffec901700 (LWP 11908) exited]
   [Thread 0x7fffef102700 (LWP 11907) exited]
   [Thread 0x73903700 (LWP 11906) exited]
   [Thread 0x74104700 (LWP 11905) exited]
   [Thread 0x7fffd38f7700 (LWP 11918) exited]
   [New Thread 0x7fffd10f6700 (LWP 11936)]
   [New Thread 0x7fffd38f7700 (LWP 11937)]
   [New Thread 0x7fffd60f8700 (LWP 11938)]
   [Thread 0x7fffd10f6700 (LWP 11936) exited]
   [Thread 0x7fffd60f8700 (LWP 11938) exited]
   [Thread 0x7fffd38f7700 (LWP 11937) exited]
   [New Thread 0x7fffd38f7700 (LWP 11955)]
   [New Thread 0x7fffd60f8700 (LWP 11956)]
   [New Thread 0x7fffd10f6700 (LWP 11957)]
   
   Thread 1 "python" received signal SIGSEGV, Segmentation fault.
   0x7fffb5ba9

[GitHub] [incubator-tvm] tqchen opened a new pull request #5400: [TIR] Enhance Substitute, python bindings for Substitute/PostOrderVisit

2020-04-21 Thread GitBox


tqchen opened a new pull request #5400:
URL: https://github.com/apache/incubator-tvm/pull/5400


   Substitute now takes a std::function to customize more replacing behaviors.
   
   Co-authored-by: Siyuan Feng 
   
   
   







[GitHub] [incubator-tvm] tqchen commented on issue #5400: [TIR] Enhance Substitute, python bindings for Substitute/PostOrderVisit

2020-04-21 Thread GitBox


tqchen commented on issue #5400:
URL: https://github.com/apache/incubator-tvm/pull/5400#issuecomment-617485403


   cc @Hzfengsy @yzhliu @ZihengJiang 







[GitHub] [incubator-tvm] tqchen edited a comment on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py

2020-04-21 Thread GitBox


tqchen edited a comment on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-617475038


   @leandron can you also provide a bit more detail?
   
   e.g. does directly running `tests/python/unittest/test_autotvm_xgboost_model.py` 
fail, or do we need to run the entire unittest? It would also be nice if you 
could send a CI binary hashtag (perhaps in Docker Hub) to confirm the problematic 
issue.
   
   I tried to build a docker image with xgboost==1.0.2 and it seems I cannot 
repro the issue.







[GitHub] [incubator-tvm] tqchen commented on issue #4953: [CI][Docker] xgboost 1.0.1 causes segfault on test_autotvm_xgboost_model.py

2020-04-21 Thread GitBox


tqchen commented on issue #4953:
URL: https://github.com/apache/incubator-tvm/issues/4953#issuecomment-617475038


   @leandron can you also provide a bit more detail?
   
   e.g. does directly running `tests/python/unittest/test_autotvm_xgboost_model.py` 
fail, or do we need to run the entire unittest? It would also be nice if you 
could send a CI binary hashtag (perhaps in Docker Hub) to confirm the problematic 
issue.
   







[GitHub] [incubator-tvm] kparzysz-quic opened a new pull request #5399: [LLVM] Use ArrayRef in calls to CreateShuffleVector

2020-04-21 Thread GitBox


kparzysz-quic opened a new pull request #5399:
URL: https://github.com/apache/incubator-tvm/pull/5399


   This switch was made in LLVM 11. Previously this function was expecting mask 
indices of type `uint32_t`. This variant is now deprecated.
   







[GitHub] [incubator-tvm] kparzysz-quic opened a new pull request #5398: [LLVM] Replace calls to Type::getVectorNumElements

2020-04-21 Thread GitBox


kparzysz-quic opened a new pull request #5398:
URL: https://github.com/apache/incubator-tvm/pull/5398


   This function has recently been removed in LLVM 11. Use an alternative way to 
obtain the vector element count (`VectorType::getNumElements`), which works for 
all LLVM versions.







[GitHub] [incubator-tvm] hlu1 commented on issue #5381: [Runtime] Enable set_output in GraphRuntime

2020-04-21 Thread GitBox


hlu1 commented on issue #5381:
URL: https://github.com/apache/incubator-tvm/pull/5381#issuecomment-617442169


   I don't think you need to implement a non-copy version of set_output if you 
simply want to avoid a copy. `GetOutput` returns an NDArray, which is essentially 
a shared_ptr to the underlying tensor:
   
   
https://github.com/apache/incubator-tvm/blob/master/src/runtime/graph/graph_runtime.cc#L156-L160
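   The point can be illustrated with a toy runtime in pure Python (class and
method names here are invented for the sketch; this is not the TVM API):
everything hinges on returning a handle that aliases the output buffer rather
than copying it.

```python
import array

# Hypothetical toy runtime, only to illustrate shared-handle output semantics.
class ToyRuntime:
    def __init__(self):
        self._out = array.array('f', [0.0] * 4)  # internal output buffer

    def run(self):
        for i in range(4):
            self._out[i] = float(i)

    def get_output(self):
        # Return a handle to the buffer itself, not a copy, so the caller
        # sees updates without any extra copy (as GetOutput's NDArray does).
        return self._out

rt = ToyRuntime()
view = rt.get_output()  # handle obtained up front aliases the same storage
rt.run()
assert list(view) == [0.0, 1.0, 2.0, 3.0]
```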







[GitHub] [incubator-tvm] tqchen edited a comment on issue #5388: [RUNTIME][VULKAN] vkBuffer released before memory copy command send to GPU

2020-04-21 Thread GitBox


tqchen edited a comment on issue #5388:
URL: https://github.com/apache/incubator-tvm/issues/5388#issuecomment-617430228


   CUDA's runtime API cudaFree implicitly syncs the default stream







[GitHub] [incubator-tvm] tqchen commented on issue #5388: [RUNTIME][VULKAN] vkBuffer released before memory copy command send to GPU

2020-04-21 Thread GitBox


tqchen commented on issue #5388:
URL: https://github.com/apache/incubator-tvm/issues/5388#issuecomment-617430228


   CUDA's cudaFree implicitly syncs the default stream







[GitHub] [incubator-tvm] samwyi edited a comment on issue #5388: [RUNTIME][VULKAN] vkBuffer released before memory copy command send to GPU

2020-04-21 Thread GitBox


samwyi edited a comment on issue #5388:
URL: https://github.com/apache/incubator-tvm/issues/5388#issuecomment-617424509


   > Thanks @samwyi for reporting the problem. Perhaps an easier fix now would 
be to call into Synchronize directly when FreeDataSpace is called. This is also 
the default behavior in CUDA. Given that most of the space won't be freed 
during computation, this might be a fine fix.
   > 
   > @samwyi do you mind sending a PR?
   
   I'd be glad to provide a fix, but it seems CUDA doesn't call Synchronize in 
FreeDataSpace. CUDA doesn't seem to use deferred kernels, so it should be fine 
for CUDA.







[GitHub] [incubator-tvm] samwyi commented on issue #5388: [RUNTIME][VULKAN] vkBuffer released before memory copy command send to GPU

2020-04-21 Thread GitBox


samwyi commented on issue #5388:
URL: https://github.com/apache/incubator-tvm/issues/5388#issuecomment-617424509


   > Thanks @samwyi for reporting the problem. Perhaps an easier fix now would 
be to call into Synchronize directly when FreeDataSpace is called. This is also 
the default behavior in CUDA. Given that most of the space won't be freed 
during computation, this might be a fine fix.
   > 
   > @samwyi do you mind sending a PR?
   
   I'd be glad to provide a fix, but it seems CUDA doesn't call Synchronize in 
FreeDataSpace. CUDA doesn't seem to use deferred kernels, so it should be fine.







[incubator-tvm] branch master updated: [PTYTHON] Migrate VTA TIR passes to the new pass manager. (#5397)

2020-04-21 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new d327787  [PTYTHON] Migrate VTA TIR passes to the new pass manager. 
(#5397)
d327787 is described below

commit d3277874a24e775d2476b0eb0ad89f3a46964a14
Author: Tianqi Chen 
AuthorDate: Tue Apr 21 14:23:18 2020 -0700

[PTYTHON] Migrate VTA TIR passes to the new pass manager. (#5397)
---
 include/tvm/target/target.h|   5 +-
 python/tvm/autotvm/measure/measure_methods.py  |   8 +-
 python/tvm/driver/build_module.py  |  29 +-
 python/tvm/tir/function.py |  16 +
 src/target/target.cc   |   4 +-
 tests/python/relay/test_pass_fold_constant.py  |   8 +-
 tests/python/unittest/test_target_codegen_cuda.py  |  10 +-
 tests/python/unittest/test_target_codegen_llvm.py  |  11 +-
 .../unittest/test_tir_pass_verify_gpu_code.py  |   8 +-
 tutorials/dev/low_level_custom_pass.py |  11 +-
 vta/python/vta/build_module.py |  56 +-
 vta/python/vta/ir_pass.py  | 995 -
 vta/python/vta/transform.py| 962 
 13 files changed, 1050 insertions(+), 1073 deletions(-)

diff --git a/include/tvm/target/target.h b/include/tvm/target/target.h
index 59aa955..829de73 100644
--- a/include/tvm/target/target.h
+++ b/include/tvm/target/target.h
@@ -27,6 +27,7 @@
 #include 
 #include 
 #include 
+#include 
 
 #include 
 #include 
@@ -225,8 +226,8 @@ class BuildConfigNode : public Object {
   /*! \brief Whether to partition const loop */
   bool partition_const_loop = false;
 
-  /*! \brief Whether to dump the IR of each pass (only when building from 
python) */
-  std::vector< std::pair > add_lower_pass;
+  /*! \brief List of passes to be injected into the low-level pipeline. */
+  std::vector> add_lower_pass;
 
   /*! \brief Whether to dump the IR of each pass (only when building from 
python) */
   bool dump_pass_ir = false;
diff --git a/python/tvm/autotvm/measure/measure_methods.py 
b/python/tvm/autotvm/measure/measure_methods.py
index 698ddbc..5ddc5df 100644
--- a/python/tvm/autotvm/measure/measure_methods.py
+++ b/python/tvm/autotvm/measure/measure_methods.py
@@ -615,9 +615,9 @@ def gpu_verify_pass(**kwargs):
 """Verify the validity of a gpu kernel.
 This pass will check memory usage and number of threads per block.
 """
-def verify_pass(stmt):
-valid = ir_pass.VerifyGPUCode(stmt, kwargs)
+def verify_pass(f, *_):
+valid = ir_pass.VerifyGPUCode(f.body, kwargs)
 if not valid:
 raise InstantiationError("Skipped because of invalid gpu kernel")
-return stmt
-return verify_pass
+return f
+return tvm.tir.transform.prim_func_pass(verify_pass, opt_level=0)
diff --git a/python/tvm/driver/build_module.py 
b/python/tvm/driver/build_module.py
index 35700ba..dcd6d44 100644
--- a/python/tvm/driver/build_module.py
+++ b/python/tvm/driver/build_module.py
@@ -123,25 +123,6 @@ def form_irmodule(sch, args, name, binds):
 return tvm.IRModule({name: func})
 
 
-def _wrap_as_prim_func_pass(flist, name):
-"""Wrap flist as a function pass.
-
-This is an temporary adapter before we fully
-migrate to the new pass manager.
-"""
-def _transform(func, *_):
-stmt = func.body
-for f in flist:
-stmt = f(stmt)
-# create a new function with updated body.
-return tvm.tir.PrimFunc(func.params,
-stmt,
-func.ret_type,
-func.buffer_map,
-func.attrs)
-return tvm.tir.transform.prim_func_pass(_transform, opt_level=0, name=name)
-
-
 def lower(sch,
   args,
   name="main",
@@ -190,15 +171,15 @@ def lower(sch,
 else:
 mod = sch
 
+pass_list = lower_phase0
 # Phase 1
-pass_list = [
-_wrap_as_prim_func_pass(lower_phase0, "Custom-Phase0"),
+pass_list += [
 tvm.tir.transform.InjectPrefetch(),
 tvm.tir.transform.StorageFlatten(64, cfg.instrument_bound_checkers),
 tvm.tir.transform.NarrowDataType(32),
 tvm.tir.transform.Simplify(),
-_wrap_as_prim_func_pass(lower_phase1, "Custom-Phase1"),
 ]
+pass_list += lower_phase1
 
 # Phase 2
 if not simple_mode:
@@ -214,8 +195,8 @@ def lower(sch,
 cfg.auto_unroll_max_depth,
 cfg.auto_unroll_max_extent,
 cfg.unroll_explicit),
-_wrap_as_prim_func_pass(lower_phase2, "Custom-Phase2"),
 ]
+pass_list += lower_phase2
 
 # Phase 3
 pass_list += [
@@ -225,7 +206,7 @@ def lower(sch,
 
 if not cfg.disable_select_rewriting:
 pass_list += [tv
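The refactor above splices the user-supplied custom passes into the pipeline as
plain per-phase lists instead of wrapping them. A simplified pure-Python sketch
of that convention (function names here are hypothetical stand-ins; the real
entries are `(phase, transform.Pass)` pairs from `BuildConfig.add_lower_pass`):

```python
def split_by_phase(add_lower_pass, num_phases=4):
    # Group (phase, pass) pairs into per-phase lists: phase 0 runs first,
    # phase 3 last, matching where lower() splices them into its pipeline.
    phases = [[] for _ in range(num_phases)]
    for phase, p in add_lower_pass:
        phases[phase].append(p)
    return phases

def custom_pass_a(mod): return mod  # stand-ins for real transform.Pass objects
def custom_pass_b(mod): return mod

lower_phase0, lower_phase1, lower_phase2, lower_phase3 = split_by_phase(
    [(0, custom_pass_a), (2, custom_pass_b)])
assert lower_phase0 == [custom_pass_a]
assert lower_phase2 == [custom_pass_b]
assert lower_phase1 == [] and lower_phase3 == []
```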

[GitHub] [incubator-tvm] cbalint13 commented on a change in pull request #5395: [RELAY][PYTORCH]cosh,sinh,log2,log10,log1p op support

2020-04-21 Thread GitBox


cbalint13 commented on a change in pull request #5395:
URL: https://github.com/apache/incubator-tvm/pull/5395#discussion_r412425014



##
File path: python/tvm/relay/op/_tensor_grad.py
##
@@ -61,6 +61,24 @@ def log_grad(orig, grad):
 return [grad * ones_like(x) / x]
 
 
+@register_gradient("log2")
+def log2_grad(orig, grad):
+"""Returns [grad * 1 / (log(2) * x)]"""
+x = orig.args[0]
+ones = ones_like(x)
+two = const(2.0)
+return [grad * ones / (log(two) * x)]
+
+
+@register_gradient("log10")
+def log10_grad(orig, grad):
+"""Returns [grad * 1 / (log(10) * x)]"""
+x = orig.args[0]
+ones = ones_like(x)
+ten = const(2.0)

Review comment:
   should be const(10.0)
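   The fix the reviewer asks for can be sanity-checked with a small
finite-difference test, independent of TVM (pure Python, `math` only):

```python
import math

def log2_grad(x):
    # d/dx log2(x) = 1 / (log(2) * x), matching the registered gradient
    return 1.0 / (math.log(2.0) * x)

def log10_grad(x):
    # d/dx log10(x) = 1 / (log(10) * x); note log(10), not log(2)
    return 1.0 / (math.log(10.0) * x)

def numeric_grad(f, x, eps=1e-6):
    # Central finite difference
    return (f(x + eps) - f(x - eps)) / (2.0 * eps)

x = 3.0
assert abs(log2_grad(x) - numeric_grad(math.log2, x)) < 1e-6
assert abs(log10_grad(x) - numeric_grad(math.log10, x)) < 1e-6
# The typo flagged above (using log(2) in the log10 gradient) fails the check:
assert abs(1.0 / (math.log(2.0) * x) - numeric_grad(math.log10, x)) > 1e-3
```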









[GitHub] [incubator-tvm] tqchen edited a comment on issue #5397: [TIR] Migrate VTA TIR passes to the new pass manager.

2020-04-21 Thread GitBox


tqchen edited a comment on issue #5397:
URL: https://github.com/apache/incubator-tvm/pull/5397#issuecomment-617351149


   cc @tmoreau89 @ZihengJiang  @zhiics 







[GitHub] [incubator-tvm] tqchen commented on issue #5397: [TIR] Migrate VTA TIR passes to the new pass manager.

2020-04-21 Thread GitBox


tqchen commented on issue #5397:
URL: https://github.com/apache/incubator-tvm/pull/5397#issuecomment-617351149


   cc @tmoreau89 @ZihengJiang  







[GitHub] [incubator-tvm] tqchen opened a new pull request #5397: [PTYTHON] Migrate VTA TIR passes to the new pass manager.

2020-04-21 Thread GitBox


tqchen opened a new pull request #5397:
URL: https://github.com/apache/incubator-tvm/pull/5397


   The new pass manager returns every pass as a transform.Pass, which allows 
IRModule->IRModule transforms.
   
   cc @tmoreau89 







[GitHub] [incubator-tvm] tqchen edited a comment on issue #5362: [Tutorial - QNN] Prequantized MXNet model compilation.

2020-04-21 Thread GitBox


tqchen edited a comment on issue #5362:
URL: https://github.com/apache/incubator-tvm/pull/5362#issuecomment-617340606


   OK, let us wait for the mxnet-mkl then, currently blocked by 
https://github.com/apache/incubator-tvm/issues/5396 hopefully we can land this 
week







[GitHub] [incubator-tvm] tqchen commented on issue #5362: [Tutorial - QNN] Prequantized MXNet model compilation.

2020-04-21 Thread GitBox


tqchen commented on issue #5362:
URL: https://github.com/apache/incubator-tvm/pull/5362#issuecomment-617340606


   OK, let us wait for the mxnet-mkl then, currently blocked by 
https://github.com/apache/incubator-tvm/issues/5396







[GitHub] [incubator-tvm] tqchen opened a new issue #5396: [DOCS] Sphinx docs warning fixes

2020-04-21 Thread GitBox


tqchen opened a new issue #5396:
URL: https://github.com/apache/incubator-tvm/issues/5396


   This issue happens when updating the docker status for ci-gpu to the 
latest (the main change is mkl-ml). Given these warnings, perhaps we need to 
migrate the markdown docs.
   
   Currently blocked on sphinx pre-check warnings due to the version upgrade 
http://ci.tvm.ai:8080/job/temp-ci-docker-staging/job/ci-stage/29/execution/node/205/log/
 and we need to fix them before we upgrade the image.
   
   Actions to be taken
   - [ ] Migrate the markdown to rst
   - cc @tmoreau89 for docs/vta/install.md
   - cc @kazum for docs/deploy/aws_fpga.md
   - [ ] fix the remaining warnings
   







[GitHub] [incubator-tvm] siju-samuel opened a new pull request #5395: [RELAY][PYTORCH]cosh,sinh,log2,log10,log1p op support

2020-04-21 Thread GitBox


siju-samuel opened a new pull request #5395:
URL: https://github.com/apache/incubator-tvm/pull/5395


   - cosh
   - sinh
   - log2
   - log10
   - log1p
The ops are supported in the relay and pytorch frontends.
   
   @masahi please help me to review this PR. Thanks in advance.







[GitHub] [incubator-tvm] anijain2305 edited a comment on issue #5394: [TFLITE]Quantize & Dequantize op

2020-04-21 Thread GitBox


anijain2305 edited a comment on issue #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#issuecomment-617284954


   LGTM.
   
   @inadob  Is there any way to add a test? Do you have any suggestions here?







[GitHub] [incubator-tvm] anijain2305 commented on issue #5394: [TFLITE]Quantize & Dequantize op

2020-04-21 Thread GitBox


anijain2305 commented on issue #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394#issuecomment-617284954


   LGTM. Is there any way to add a test? 
   
   @inadob Do you have any suggestions here?







[GitHub] [incubator-tvm] anijain2305 edited a comment on issue #5362: [Tutorial - QNN] Prequantized MXNet model compilation.

2020-04-21 Thread GitBox


anijain2305 edited a comment on issue #5362:
URL: https://github.com/apache/incubator-tvm/pull/5362#issuecomment-617279905


   @masahi @tqchen The MXNet quantized models have operators that can only work 
with MKLDNN. An example is as follows. Note the `"op": "_sg_mkldnn_conv",`
   
   ~~~
   {
 "op": "_sg_mkldnn_conv",
 "name": "quantized_sg_mkldnn_conv_bn_act_18",
 "attrs": {
   "max_calib_range": "2.660447",
   "min_calib_range": "0.00",
   "quantized": "true",
   "with_act": "true",
   "with_bn": "true"
 },
 "inputs": [[110, 0, 0], [111, 0, 0], [112, 0, 0], [113, 0, 0], [114, 
0, 1], [115, 0, 1], [110, 1, 0], [110, 2, 0]],
 "subgraphs": [
   {
 "nodes": [
   {
 "op": "null",
 "name": "sg_mkldnn_conv_bn_add_act_15_output0",
 "inputs": []
   },
   {
 "op": "null",
 "name": "resnetv10_stage4_conv3_weight0",
 "inputs": []
   },
   {
 "op": "Convolution",
 "name": "resnetv10_stage4_conv3_fwd",
 "attrs": {
   "dilate": "(1, 1)",
   "kernel": "(3, 3)",
   "layout": "NCHW",
   "no_bias": "True",
   "num_filter": "512",
   "num_group": "1",
   "pad": "(1, 1)",
   "stride": "(1, 1)"
 },
 "inputs": [[0, 0, 0], [1, 0, 0]]
   },
   {
 "op": "null",
 "name": "resnetv10_stage4_batchnorm3_gamma0",
 "inputs": []
   },
   {
 "op": "null",
 "name": "resnetv10_stage4_batchnorm3_beta0",
 "inputs": []
   },
   {
 "op": "null",
 "name": "resnetv10_stage4_batchnorm3_running_mean0",
 "inputs": []
   },
   {
 "op": "null",
 "name": "resnetv10_stage4_batchnorm3_running_var0",
 "inputs": []
   },
   {
 "op": "BatchNorm",
 "name": "resnetv10_stage4_batchnorm3_fwd",
 "attrs": {
   "axis": "1",
   "eps": "1e-05",
   "fix_gamma": "False",
   "momentum": "0.9",
   "use_global_stats": "False"
 },
 "inputs": [[2, 0, 0], [3, 0, 0], [4, 0, 0], [5, 0, 0], [6, 0, 
0]]
   },
   {
 "op": "Activation",
 "name": "resnetv10_stage4_relu1_fwd",
 "attrs": {"act_type": "relu"},
 "inputs": [[7, 0, 0]]
   }
 ],
 "arg_nodes": [0, 1, 3, 4, 5, 6],
 "node_row_ptr": [0, 1, 2, 3, 4, 5, 6, 7, 10, 11],
 "heads": [[8, 0, 0]]
   }
 ]
   },
   ~~~
   
   Even if I quantize the model outside of this tutorial, I would still need 
`mxnet-mkl` to read the MXNet quantized model in the Relay parser.







[GitHub] [incubator-tvm] anijain2305 commented on issue #5362: [Tutorial - QNN] Prequantized MXNet model compilation.

2020-04-21 Thread GitBox


anijain2305 commented on issue #5362:
URL: https://github.com/apache/incubator-tvm/pull/5362#issuecomment-617279905


   @masahi @tqchen The MXNet quantized models have operators that can only work 
with MKLDNN. An example is as follows. Note the `"op": "_sg_mkldnn_conv",`
   
   ~~~
   {
 "op": "_sg_mkldnn_conv",
 "name": "quantized_sg_mkldnn_conv_bn_act_18",
 "attrs": {
   "max_calib_range": "2.660447",
   "min_calib_range": "0.00",
   "quantized": "true",
   "with_act": "true",
   "with_bn": "true"
 },
 "inputs": [[110, 0, 0], [111, 0, 0], [112, 0, 0], [113, 0, 0], [114, 
0, 1], [115, 0, 1], [110, 1, 0], [110, 2, 0]],
 "subgraphs": [
   {
 "nodes": [
   {
 "op": "null",
 "name": "sg_mkldnn_conv_bn_add_act_15_output0",
 "inputs": []
   },
   {
 "op": "null",
 "name": "resnetv10_stage4_conv3_weight0",
 "inputs": []
   },
   {
 "op": "Convolution",
 "name": "resnetv10_stage4_conv3_fwd",
 "attrs": {
   "dilate": "(1, 1)",
   "kernel": "(3, 3)",
   "layout": "NCHW",
   "no_bias": "True",
   "num_filter": "512",
   "num_group": "1",
   "pad": "(1, 1)",
   "stride": "(1, 1)"
 },
 "inputs": [[0, 0, 0], [1, 0, 0]]
   },
   {
 "op": "null",
 "name": "resnetv10_stage4_batchnorm3_gamma0",
 "inputs": []
   },
   {
 "op": "null",
 "name": "resnetv10_stage4_batchnorm3_beta0",
 "inputs": []
   },
   {
 "op": "null",
 "name": "resnetv10_stage4_batchnorm3_running_mean0",
 "inputs": []
   },
   {
 "op": "null",
 "name": "resnetv10_stage4_batchnorm3_running_var0",
 "inputs": []
   },
   {
 "op": "BatchNorm",
 "name": "resnetv10_stage4_batchnorm3_fwd",
 "attrs": {
   "axis": "1",
   "eps": "1e-05",
   "fix_gamma": "False",
   "momentum": "0.9",
   "use_global_stats": "False"
 },
 "inputs": [[2, 0, 0], [3, 0, 0], [4, 0, 0], [5, 0, 0], [6, 0, 
0]]
   },
   {
 "op": "Activation",
 "name": "resnetv10_stage4_relu1_fwd",
 "attrs": {"act_type": "relu"},
 "inputs": [[7, 0, 0]]
   }
 ],
 "arg_nodes": [0, 1, 3, 4, 5, 6],
 "node_row_ptr": [0, 1, 2, 3, 4, 5, 6, 7, 10, 11],
 "heads": [[8, 0, 0]]
   }
 ]
   },
   ~~~
   
   Even if I quantize the model outside of this tutorial, I would still need 
`mxnet-mkl` to read the MXNet quantized model in the Relay parser.







[GitHub] [incubator-tvm] siju-samuel commented on issue #5133: [Torch] A list of missing op conversion in need of help

2020-04-21 Thread GitBox


siju-samuel commented on issue #5133:
URL: https://github.com/apache/incubator-tvm/issues/5133#issuecomment-617272162


   @gopinath-r can you please share the script to run from the bts github or 
the scripted_model? 
   In `repeat`, can you please print both `inputs, input_types` and share the 
result.
   







[GitHub] [incubator-tvm] FrozenGene commented on issue #5215: [AutoTVM] AutoTVM incorrect measurement

2020-04-21 Thread GitBox


FrozenGene commented on issue #5215:
URL: https://github.com/apache/incubator-tvm/issues/5215#issuecomment-617256430


   > @FrozenGene what is the status of this issue?
   
   This issue is meant to track all templates that use `debug_skip_region` (like 
arm / cuda and so on). Currently, we have only fixed intel conv2d. I think I 
should list the targets we should fix.







[GitHub] [incubator-tvm] tqchen commented on issue #5362: [Tutorial - QNN] Prequantized MXNet model compilation.

2020-04-21 Thread GitBox


tqchen commented on issue #5362:
URL: https://github.com/apache/incubator-tvm/pull/5362#issuecomment-617234125


   We can just use the floating point model for the reference point







[GitHub] [incubator-tvm] tqchen commented on issue #5215: [AutoTVM] AutoTVM incorrect measurement

2020-04-21 Thread GitBox


tqchen commented on issue #5215:
URL: https://github.com/apache/incubator-tvm/issues/5215#issuecomment-617232476


   @FrozenGene what is the status of this issue?







[GitHub] [incubator-tvm] tqchen edited a comment on issue #5373: [REFACTOR][TIR] Migrate HoistIfThenElse to the unified pass manager

2020-04-21 Thread GitBox


tqchen edited a comment on issue #5373:
URL: https://github.com/apache/incubator-tvm/issues/5373#issuecomment-617230692


   Thanks @kevinthesun, it should take about an hour







[GitHub] [incubator-tvm] tqchen commented on issue #5373: [REFACTOR][TIR] Migrate HoistIfThenElse to the unified pass manager

2020-04-21 Thread GitBox


tqchen commented on issue #5373:
URL: https://github.com/apache/incubator-tvm/issues/5373#issuecomment-617230692


   Thanks @kevinthesun, it should take a few hours







[GitHub] [incubator-tvm] adobay commented on a change in pull request #5289: add tensorflow cumsum

2020-04-21 Thread GitBox


adobay commented on a change in pull request #5289:
URL: https://github.com/apache/incubator-tvm/pull/5289#discussion_r412253968



##
File path: topi/include/topi/transform.h
##
@@ -1105,6 +1105,88 @@ inline tvm::te::Tensor matmul(const tvm::te::Tensor& A,
   return tvm::te::compute(output_shape, l, name, tag);
 }
 
+/**
+ * Compute the cumulative sum of the tensor `A` along `axis`.
+ *
+ *  By default, this operation performs an inclusive cumsum, which means that 
the first
+ *  element of the input is identical to the first element of the output:
+ *
+ *  ```python
+ *  cumsum([a, b, c])  # [a, a + b, a + b + c]
+ *  ```
+ *
+ *  By setting the `exclusive` kwarg to `True`, an exclusive cumsum is 
performed
+ *  instead:
+ *
+ *  ```python
+ *  cumsum([a, b, c], exclusive=True)  # [0, a, a + b]
+ *  ```
+ *
+ *  By setting the `reverse` kwarg to `True`, the cumsum is performed in the
+ *  opposite direction:
+ *
+ *  ```python
+ *  cumsum([a, b, c], reverse=True)  # [a + b + c, b + c, c]
+ *  ```
+ *
+ *  The `reverse` and `exclusive` kwargs can also be combined:
+ *
+ *  ```python
+ *  cumsum([a, b, c], exclusive=True, reverse=True)  # [b + c, c, 0]
+ *  ```
+ *
+ * @param A Input tensor
+ * @param axis  Must be in the range `[-rank(x), rank(x))`
+ * @param exclusive Perform exclusive cumsum
+ * @param reverse   Performed in the opposite direction
+ * @param name  The name of the operation
+ * @param tag   The tag to mark the operation
+ * @return  A Tensor whose op member is the cumsum operation
+ */
+inline tvm::te::Tensor cumsum(const tvm::te::Tensor& A,
+  int axis,
+  bool exclusive = false,
+  bool reverse = false,
+  std::string name = "T_cumsum",
+  std::string tag = kCumsum) {
+int totalSize = static_cast<int>(A->shape.size());
+if (axis < 0) {
+axis = totalSize + axis;
+}
+auto maxLength = A->shape[axis];
+auto l = [&](const Array<Var>& input_indices) {

Review comment:
   @tqchen 
   I implemented cumsum using scan, which has also been submitted, but I got an 
error when running the test case. Could you please help me look into this error 
```AttributeError:  has no attribute axis```?
   
   ```
   Traceback (most recent call last):
   
 File "tests/python/frontend/tensorflow/test_forward.py", line 3359, in 

   test_forward_cumsum()
   
 File "tests/python/frontend/tensorflow/test_forward.py", line 1373, in 
test_forward_cumsum
   _cumsum((4, ), 0, exclusive, reverse)
   
 File "tests/python/frontend/tensorflow/test_forward.py", line 1367, in 
_cumsum
   compare_tf_with_tvm([np_data], ['in_data:0'], 'cumsum:0')
   
 File "tests/python/frontend/tensorflow/test_forward.py", line 192, in 
compare_tf_with_tvm
   cuda_layout=cuda_layout)
   
 File "tests/python/frontend/tensorflow/test_forward.py", line 127, in 
run_tvm_graph
   graph, lib, params = relay.build(mod, target, target_host, params)
   
 File 
"/Users/adobay/Documents/code/tvm/tvm/python/tvm/relay/build_module.py", line 
251, in build
   graph_json, mod, params = bld_mod.build(mod, target, target_host, params)
   
 File 
"/Users/adobay/Documents/code/tvm/tvm/python/tvm/relay/build_module.py", line 
120, in build
   self._build(mod, target, target_host)
   
 File 
"/Users/adobay/Documents/code/tvm/tvm/python/tvm/_ffi/_ctypes/packed_func.py", 
line 216, in __call__
   raise get_last_ffi_error()
   
   AttributeError: Traceback (most recent call last):
 [bt] (8) 9   libtvm.dylib0x000128d92dc8 
tvm::relay::ExprFunctor > (tvm::RelayExpr 
const&)>::InitVTable()::'lambda4'(tvm::runtime::ObjectRef const&, 
tvm::relay::ExprFunctor > (tvm::RelayExpr 
const&)>*)::__invoke(tvm::runtime::ObjectRef const&, 
tvm::relay::ExprFunctor > (tvm::RelayExpr 
const&)>*) + 24
 [bt] (7) 8   libtvm.dylib0x000128d90048 
tvm::relay::backend::GraphRuntimeCodegen::VisitExpr_(tvm::relay::CallNode 
const*) + 3752
 [bt] (6) 7   libtvm.dylib0x000128d84d74 
std::__1::__function::__func::AssignTypedLambda(tvm::relay::$_8)::'lambda'(tvm::runtime::TVMArgs
 const&, tvm::runtime::TVMRetValue*), std::__1::allocator::AssignTypedLambda(tvm::relay::$_8)::'lambda'(tvm::runtime::TVMArgs
 const&, tvm::runtime::TVMRetValue*)>, void (tvm::runtime::TVMArgs, 
tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs&&, 
tvm::runtime::TVMRetValue*&&) + 356
 [bt] (5) 6   libtvm.dylib0x000128d70d62 
tvm::relay::CompileEngineImpl::Lower(tvm::relay::CCacheKey const&) + 18
 [bt] (4) 5   libtvm.dylib0x000128d73016 
tvm::relay::CompileEngineImpl::LowerInternal(tvm::relay::CCacheKey const&) + 
1382
 [bt] (3) 4   libtvm.dylib

[GitHub] [incubator-tvm] adobay commented on a change in pull request #5289: add tensorflow cumsum

2020-04-21 Thread GitBox


adobay commented on a change in pull request #5289:
URL: https://github.com/apache/incubator-tvm/pull/5289#discussion_r412253968



##
File path: topi/include/topi/transform.h
##
@@ -1105,6 +1105,88 @@ inline tvm::te::Tensor matmul(const tvm::te::Tensor& A,
   return tvm::te::compute(output_shape, l, name, tag);
 }
 
+/**
+ * Compute the cumulative sum of the tensor `A` along `axis`.
+ *
+ *  By default, this operation performs an inclusive cumsum, which means that 
the first
+ *  element of the input is identical to the first element of the output:
+ *
+ *  ```python
+ *  cumsum([a, b, c])  # [a, a + b, a + b + c]
+ *  ```
+ *
+ *  By setting the `exclusive` kwarg to `True`, an exclusive cumsum is 
performed
+ *  instead:
+ *
+ *  ```python
+ *  cumsum([a, b, c], exclusive=True)  # [0, a, a + b]
+ *  ```
+ *
+ *  By setting the `reverse` kwarg to `True`, the cumsum is performed in the
+ *  opposite direction:
+ *
+ *  ```python
+ *  cumsum([a, b, c], reverse=True)  # [a + b + c, b + c, c]
+ *  ```
+ *
+ *  The `reverse` and `exclusive` kwargs can also be combined:
+ *
+ *  ```python
+ *  cumsum([a, b, c], exclusive=True, reverse=True)  # [b + c, c, 0]
+ *  ```
+ *
+ * @param A Input tensor
+ * @param axis  Must be in the range `[-rank(x), rank(x))`
+ * @param exclusive Perform exclusive cumsum
+ * @param reverse   Performed in the opposite direction
+ * @param name  The name of the operation
+ * @param tag   The tag to mark the operation
+ * @return  A Tensor whose op member is the cumsum operation
+ */
+inline tvm::te::Tensor cumsum(const tvm::te::Tensor& A,
+  int axis,
+  bool exclusive = false,
+  bool reverse = false,
+  std::string name = "T_cumsum",
+  std::string tag = kCumsum) {
+int totalSize = static_cast<int>(A->shape.size());
+if (axis < 0) {
+axis = totalSize + axis;
+}
+auto maxLength = A->shape[axis];
+auto l = [&](const Array<Var>& input_indices) {

Review comment:
   @tqchen 
   I implemented cumsum using scan and submitted it, but I got an error when 
running the test case. Could you please help me see this error 
```AttributeError:  has no attribute axis```?
   
   ```
   Traceback (most recent call last):
   
 File "tests/python/frontend/tensorflow/test_forward.py", line 3359, in 

   test_forward_cumsum()
   
 File "tests/python/frontend/tensorflow/test_forward.py", line 1373, in 
test_forward_cumsum
   _cumsum((4, ), 0, exclusive, reverse)
   
 File "tests/python/frontend/tensorflow/test_forward.py", line 1367, in 
_cumsum
   compare_tf_with_tvm([np_data], ['in_data:0'], 'cumsum:0')
   
 File "tests/python/frontend/tensorflow/test_forward.py", line 192, in 
compare_tf_with_tvm
   cuda_layout=cuda_layout)
   
 File "tests/python/frontend/tensorflow/test_forward.py", line 127, in 
run_tvm_graph
   graph, lib, params = relay.build(mod, target, target_host, params)
   
 File 
"/Users/adobay/Documents/code/tvm/tvm/python/tvm/relay/build_module.py", line 
251, in build
   graph_json, mod, params = bld_mod.build(mod, target, target_host, params)
   
 File 
"/Users/adobay/Documents/code/tvm/tvm/python/tvm/relay/build_module.py", line 
120, in build
   self._build(mod, target, target_host)
   
 File 
"/Users/adobay/Documents/code/tvm/tvm/python/tvm/_ffi/_ctypes/packed_func.py", 
line 216, in __call__
   raise get_last_ffi_error()
   
   AttributeError: Traceback (most recent call last):
 [bt] (8) 9   libtvm.dylib0x000128d92dc8 
tvm::relay::ExprFunctor > (tvm::RelayExpr 
const&)>::InitVTable()::'lambda4'(tvm::runtime::ObjectRef const&, 
tvm::relay::ExprFunctor > (tvm::RelayExpr 
const&)>*)::__invoke(tvm::runtime::ObjectRef const&, 
tvm::relay::ExprFunctor > (tvm::RelayExpr 
const&)>*) + 24
 [bt] (7) 8   libtvm.dylib0x000128d90048 
tvm::relay::backend::GraphRuntimeCodegen::VisitExpr_(tvm::relay::CallNode 
const*) + 3752
 [bt] (6) 7   libtvm.dylib0x000128d84d74 
std::__1::__function::__func::AssignTypedLambda(tvm::relay::$_8)::'lambda'(tvm::runtime::TVMArgs
 const&, tvm::runtime::TVMRetValue*), std::__1::allocator::AssignTypedLambda(tvm::relay::$_8)::'lambda'(tvm::runtime::TVMArgs
 const&, tvm::runtime::TVMRetValue*)>, void (tvm::runtime::TVMArgs, 
tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs&&, 
tvm::runtime::TVMRetValue*&&) + 356
 [bt] (5) 6   libtvm.dylib0x000128d70d62 
tvm::relay::CompileEngineImpl::Lower(tvm::relay::CCacheKey const&) + 18
 [bt] (4) 5   libtvm.dylib0x000128d73016 
tvm::relay::CompileEngineImpl::LowerInternal(tvm::relay::CCacheKey const&) + 
1382
 [bt] (3) 4   libtvm.dylib0x00012

[GitHub] [incubator-tvm] adobay commented on a change in pull request #5289: add tensorflow cumsum

2020-04-21 Thread GitBox


adobay commented on a change in pull request #5289:
URL: https://github.com/apache/incubator-tvm/pull/5289#discussion_r412253968



##
File path: topi/include/topi/transform.h
##
@@ -1105,6 +1105,88 @@ inline tvm::te::Tensor matmul(const tvm::te::Tensor& A,
   return tvm::te::compute(output_shape, l, name, tag);
 }
 
+/**
+ * Compute the cumulative sum of the tensor `A` along `axis`.
+ *
+ *  By default, this operation performs an inclusive cumsum, which means that 
the first
+ *  element of the input is identical to the first element of the output:
+ *
+ *  ```python
+ *  cumsum([a, b, c])  # [a, a + b, a + b + c]
+ *  ```
+ *
+ *  By setting the `exclusive` kwarg to `True`, an exclusive cumsum is 
performed
+ *  instead:
+ *
+ *  ```python
+ *  cumsum([a, b, c], exclusive=True)  # [0, a, a + b]
+ *  ```
+ *
+ *  By setting the `reverse` kwarg to `True`, the cumsum is performed in the
+ *  opposite direction:
+ *
+ *  ```python
+ *  cumsum([a, b, c], reverse=True)  # [a + b + c, b + c, c]
+ *  ```
+ *
+ *  The `reverse` and `exclusive` kwargs can also be combined:
+ *
+ *  ```python
+ *  cumsum([a, b, c], exclusive=True, reverse=True)  # [b + c, c, 0]
+ *  ```
+ *
+ * @param A Input tensor
+ * @param axis  Must be in the range `[-rank(x), rank(x))`
+ * @param exclusive Perform exclusive cumsum
+ * @param reverse   Performed in the opposite direction
+ * @param name  The name of the operation
+ * @param tag   The tag to mark the operation
+ * @return  A Tensor whose op member is the cumsum operation
+ */
+inline tvm::te::Tensor cumsum(const tvm::te::Tensor& A,
+  int axis,
+  bool exclusive = false,
+  bool reverse = false,
+  std::string name = "T_cumsum",
+  std::string tag = kCumsum) {
+int totalSize = static_cast<int>(A->shape.size());
+if (axis < 0) {
+axis = totalSize + axis;
+}
+auto maxLength = A->shape[axis];
+auto l = [&](const Array<Var>& input_indices) {

Review comment:
   @tqchen 
   I implemented cumsum using scan, but I got an error when running the test 
case. Could you please help me see this error ```AttributeError:  has no attribute axis```?
   
   ```
   Traceback (most recent call last):
   
 File "tests/python/frontend/tensorflow/test_forward.py", line 3359, in 

   test_forward_cumsum()
   
 File "tests/python/frontend/tensorflow/test_forward.py", line 1373, in 
test_forward_cumsum
   _cumsum((4, ), 0, exclusive, reverse)
   
 File "tests/python/frontend/tensorflow/test_forward.py", line 1367, in 
_cumsum
   compare_tf_with_tvm([np_data], ['in_data:0'], 'cumsum:0')
   
 File "tests/python/frontend/tensorflow/test_forward.py", line 192, in 
compare_tf_with_tvm
   cuda_layout=cuda_layout)
   
 File "tests/python/frontend/tensorflow/test_forward.py", line 127, in 
run_tvm_graph
   graph, lib, params = relay.build(mod, target, target_host, params)
   
 File 
"/Users/adobay/Documents/code/tvm/tvm/python/tvm/relay/build_module.py", line 
251, in build
   graph_json, mod, params = bld_mod.build(mod, target, target_host, params)
   
 File 
"/Users/adobay/Documents/code/tvm/tvm/python/tvm/relay/build_module.py", line 
120, in build
   self._build(mod, target, target_host)
   
 File 
"/Users/adobay/Documents/code/tvm/tvm/python/tvm/_ffi/_ctypes/packed_func.py", 
line 216, in __call__
   raise get_last_ffi_error()
   
   AttributeError: Traceback (most recent call last):
 [bt] (8) 9   libtvm.dylib0x000128d92dc8 
tvm::relay::ExprFunctor > (tvm::RelayExpr 
const&)>::InitVTable()::'lambda4'(tvm::runtime::ObjectRef const&, 
tvm::relay::ExprFunctor > (tvm::RelayExpr 
const&)>*)::__invoke(tvm::runtime::ObjectRef const&, 
tvm::relay::ExprFunctor > (tvm::RelayExpr 
const&)>*) + 24
 [bt] (7) 8   libtvm.dylib0x000128d90048 
tvm::relay::backend::GraphRuntimeCodegen::VisitExpr_(tvm::relay::CallNode 
const*) + 3752
 [bt] (6) 7   libtvm.dylib0x000128d84d74 
std::__1::__function::__func::AssignTypedLambda(tvm::relay::$_8)::'lambda'(tvm::runtime::TVMArgs
 const&, tvm::runtime::TVMRetValue*), std::__1::allocator::AssignTypedLambda(tvm::relay::$_8)::'lambda'(tvm::runtime::TVMArgs
 const&, tvm::runtime::TVMRetValue*)>, void (tvm::runtime::TVMArgs, 
tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs&&, 
tvm::runtime::TVMRetValue*&&) + 356
 [bt] (5) 6   libtvm.dylib0x000128d70d62 
tvm::relay::CompileEngineImpl::Lower(tvm::relay::CCacheKey const&) + 18
 [bt] (4) 5   libtvm.dylib0x000128d73016 
tvm::relay::CompileEngineImpl::LowerInternal(tvm::relay::CCacheKey const&) + 
1382
 [bt] (3) 4   libtvm.dylib0x000128d743d6 
tv

[incubator-tvm] branch master updated: Tf2 test fixups (#5391)

2020-04-21 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 72f2aea  Tf2 test fixups (#5391)
72f2aea is described below

commit 72f2aea2dd219bf55c15b3cf4cfc21491f1f60dd
Author: Ramana Radhakrishnan 
AuthorDate: Tue Apr 21 15:49:41 2020 +0100

Tf2 test fixups (#5391)

* Fix oversight in importing tf.compat.v1 as tf.

* Actually disable test for lstm in TF2.1

Since the testing framework actually uses pytest, the version
check needs to be moved.
---
 tests/python/frontend/tensorflow/test_bn_dynamic.py | 5 -
 tests/python/frontend/tensorflow/test_forward.py| 8 
 2 files changed, 8 insertions(+), 5 deletions(-)

diff --git a/tests/python/frontend/tensorflow/test_bn_dynamic.py 
b/tests/python/frontend/tensorflow/test_bn_dynamic.py
index 4be838e..a2d6903 100644
--- a/tests/python/frontend/tensorflow/test_bn_dynamic.py
+++ b/tests/python/frontend/tensorflow/test_bn_dynamic.py
@@ -22,7 +22,10 @@ in TensorFlow frontend when mean and variance are not given.
 """
 import tvm
 import numpy as np
-import tensorflow as tf
+try:
+import tensorflow.compat.v1 as tf
+except ImportError:
+import tensorflow as tf
 from tvm import relay
 from tensorflow.python.framework import graph_util
 
diff --git a/tests/python/frontend/tensorflow/test_forward.py 
b/tests/python/frontend/tensorflow/test_forward.py
index bc884bb..93501f1 100644
--- a/tests/python/frontend/tensorflow/test_forward.py
+++ b/tests/python/frontend/tensorflow/test_forward.py
@@ -1901,7 +1901,9 @@ def _test_lstm_cell(batch_size, num_hidden, num_layers, 
forget_bias, dtype):
 
 def test_forward_lstm():
 '''test LSTM block cell'''
-_test_lstm_cell(1, 2, 1, 0.5, 'float32')
+if package_version.parse(tf.VERSION) < package_version.parse('2.0.0'):
+#in 2.0, tf.contrib.rnn.LSTMBlockCell is removed
+_test_lstm_cell(1, 2, 1, 0.5, 'float32')
 
 
 ###
@@ -3308,9 +3310,7 @@ if __name__ == '__main__':
 test_forward_ptb()
 
 # RNN
-if package_version.parse(tf.VERSION) < package_version.parse('2.0.0'):
-#in 2.0, tf.contrib.rnn.LSTMBlockCell is removed
-test_forward_lstm()
+test_forward_lstm()
 
 # Elementwise
 test_forward_ceil()
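The guard moved into `test_forward_lstm` above compares TF versions with `packaging.version`; the numeric ordering it relies on can be sketched with the standard library alone (`version_lt` is an illustrative helper, not code from the patch):

```python
def version_lt(version, bound):
    """True when dotted version string `version` orders before `bound` numerically."""
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(version) < as_tuple(bound)

# Mirror of the gate above: tf.contrib.rnn.LSTMBlockCell was removed in TF 2.0,
# so the legacy LSTM test should only run on TF 1.x.
tf_version = "2.1.0"  # illustrative value; the real test reads tf.VERSION
if version_lt(tf_version, "2.0.0"):
    pass  # _test_lstm_cell(1, 2, 1, 0.5, 'float32') would run here
print(version_lt("1.15.0", "2.0.0"))  # True
```

Plain string comparison would mis-order e.g. "10.0" before "2.0", which is why both `packaging.version` and this sketch parse the parts numerically.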



[GitHub] [incubator-tvm] tqchen commented on issue #5391: Tf2 test fixups

2020-04-21 Thread GitBox


tqchen commented on issue #5391:
URL: https://github.com/apache/incubator-tvm/pull/5391#issuecomment-617228512


   Thanks @u99127 @siju-samuel 



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] tqchen commented on issue #5189: Python unit test test_tuple_type crashes

2020-04-21 Thread GitBox


tqchen commented on issue #5189:
URL: https://github.com/apache/incubator-tvm/issues/5189#issuecomment-617228247


   https://github.com/apache/incubator-tvm/pull/5390







[GitHub] [incubator-tvm] tqchen commented on issue #5390: Fix test_ir_type.

2020-04-21 Thread GitBox


tqchen commented on issue #5390:
URL: https://github.com/apache/incubator-tvm/pull/5390#issuecomment-617228164


   Thanks @areusch !







[incubator-tvm] branch master updated: Fix test_ir_type. (#5390)

2020-04-21 Thread tqchen

tqchen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new be54c98  Fix test_ir_type. (#5390)
be54c98 is described below

commit be54c9844a7dc96a38b19cda50887dade9863a6a
Author: Andrew Reusch 
AuthorDate: Tue Apr 21 07:41:52 2020 -0700

Fix test_ir_type. (#5390)

* The void return type is not None/nullptr, it's VoidType or
   TupleType([]).
---
 tests/python/unittest/test_ir_type.py | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tests/python/unittest/test_ir_type.py 
b/tests/python/unittest/test_ir_type.py
index a0e7d2b..1072efb 100644
--- a/tests/python/unittest/test_ir_type.py
+++ b/tests/python/unittest/test_ir_type.py
@@ -72,7 +72,7 @@ def test_func_type():
 
 def test_tuple_type():
 tp = tvm.ir.TypeVar('tp', tvm.ir.TypeKind.Type)
-tf = tvm.ir.FuncType([], None, [], [])
+tf = tvm.ir.FuncType([], tvm.ir.TupleType([]), [], [])
 tt = tvm.ir.TensorType(tvm.runtime.convert([1, 2, 3]), 'float32')
 fields = tvm.runtime.convert([tp, tf, tt])
 



[GitHub] [incubator-tvm] gopinath-r commented on issue #5133: [Torch] A list of missing op conversion in need of help

2020-04-21 Thread GitBox


gopinath-r commented on issue #5133:
URL: https://github.com/apache/incubator-tvm/issues/5133#issuecomment-617160891


   @masahi I am trying to use tvm for the following model 
   [https://github.com/cogaplex-bts/bts/blob/master/pytorch/bts.py]
   
   I am getting the following error while building the relay IR from pytorch 
scripted model
   **TypeError: Don't know how to handle type **
   
   After debugging, I found that the error is thrown while parsing the operation 
torch.repeat, which is mapped as aten::repeat in 
_tvm/python/tvm/relay/frontend/pytorch.py_.
   
   torch.repeat operates on tensors, but while parsing torch.repeat the frontend 
is unable to handle the data type.
   
   following is the complete error trace
   
   ` in 
 1 shape_list = [('input0', (1,3,1216,352))]
 2 mod, params = relay.frontend.from_pytorch(scripted_model,
   > 3   shape_list)
   
   
~/Documents/Projects/Kyocera_depth_estimation/optimization/tvm/python/tvm/relay/frontend/pytorch.py
 in from_pytorch(script_module, input_shapes, custom_convert_map)
  2255 
  2256 ret = convert_operators(_get_operator_nodes(graph.nodes()),
   -> 2257 outputs, ret_name, convert_map, prelude)
  2258 
  2259 mod["main"] = tvm.relay.Function(_analysis.free_vars(ret[0]), 
ret[0])
   
   
~/Documents/Projects/Kyocera_depth_estimation/optimization/tvm/python/tvm/relay/frontend/pytorch.py
 in convert_operators(operators, outputs, ret_names, convert_map, prelude)
  2169 relay_op = convert_map[operator]
  2170 # print("relay_op",relay_op)
   -> 2171 relay_out = relay_op(inputs, _get_input_types(op_node))
  2172 
  2173 if isinstance(relay_out, tuple):
   
   
~/Documents/Projects/Kyocera_depth_estimation/optimization/tvm/python/tvm/relay/frontend/pytorch.py
 in _impl(inputs, input_types)
   322 # print("data",data)
   323 # print("reps",reps)
   --> 324 return _op.transform.tile(data, reps=reps)
   325 return _impl
   326 
   
   
~/Documents/Projects/Kyocera_depth_estimation/optimization/tvm/python/tvm/relay/op/transform.py
 in tile(data, reps)
   446 """
   447 
   --> 448 return _make.tile(data, reps)
   449 
   450 
   
   
~/Documents/Projects/Kyocera_depth_estimation/optimization/tvm/python/tvm/_ffi/_ctypes/packed_func.py
 in __call__(self, *args)
   209 """
   210 temp_args = []
   --> 211 values, tcodes, num_args = _make_tvm_args(args, temp_args)
   212 ret_val = TVMValue()
   213 ret_tcode = ctypes.c_int()
   
   
~/Documents/Projects/Kyocera_depth_estimation/optimization/tvm/python/tvm/_ffi/_ctypes/packed_func.py
 in _make_tvm_args(args, temp_args)
   175 else:
   176 print(arg)
   --> 177 raise TypeError("Don't know how to handle type %s" % 
type(arg))
   178 return values, type_codes, num_args
   179 
   
   TypeError: Don't know how to handle type `
   
   If you know the reason and how to resolve this, it would be really 
helpful.
   
   Thanks in advance







[GitHub] [incubator-tvm] siju-samuel opened a new pull request #5394: [TFLITE]Quantize & Dequantize op

2020-04-21 Thread GitBox


siju-samuel opened a new pull request #5394:
URL: https://github.com/apache/incubator-tvm/pull/5394


   @anijain2305 @FrozenGene  Please help to review.
   
   I haven't added any test case because I'm not able to simulate this from 
test cases, as there are no such ops in TF. These TFLite ops are added to a 
network when quantizing models whose inputs are float. And I couldn't find any 
publicly available model.
   
   eg:
   
![image](https://user-images.githubusercontent.com/15828974/79867851-9d2a1700-83fc-11ea-855e-8d6a72e6a342.png)
   



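The Quantize/Dequantize ops in the PR above map between the float and integer domains with the usual affine scheme `q = round(x / scale) + zero_point`; a minimal sketch (the scale and zero point below are made-up values, not taken from any model):

```python
def quantize(x, scale, zero_point, qmin=0, qmax=255):
    """Affine-quantize a real value into the [qmin, qmax] integer range."""
    q = round(x / scale) + zero_point
    return max(qmin, min(qmax, q))  # clamp to the representable range

def dequantize(q, scale, zero_point):
    """Map a quantized integer back to an (approximate) real value."""
    return (q - zero_point) * scale

q = quantize(0.5, scale=0.0039, zero_point=0)
print(q)  # 128
print(dequantize(q, 0.0039, 0))  # ~0.4992: the round trip loses a little precision
```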




[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #5383: [PYTORCH]where, addcdiv, addcmul op support

2020-04-21 Thread GitBox


siju-samuel commented on a change in pull request #5383:
URL: https://github.com/apache/incubator-tvm/pull/5383#discussion_r412128043



##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -337,6 +337,55 @@ def _impl(inputs, input_types):
 return _op.transform.repeat(data, repeats=repeats, axis=axis)
 return _impl
 
+
+def _parse_input_data(inp):
+import torch
+if isinstance(inp, _expr.Var):
+data = inp
+elif isinstance(inp, _expr.Call):
+data = inp
+elif isinstance(inp, torch.Tensor):

Review comment:
   OK, I will fix this and make the change accordingly. Thanks.









[GitHub] [incubator-tvm] masahi edited a comment on issue #5362: [Tutorial - QNN] Prequantized MXNet model compilation.

2020-04-21 Thread GitBox


masahi edited a comment on issue #5362:
URL: https://github.com/apache/incubator-tvm/pull/5362#issuecomment-617133013


   Can we skip calibration (uncomment to encourage testing locally) and still 
get meaningful output? I think it is unlikely we can update our CI for the sake 
of this tutorial.







[GitHub] [incubator-tvm] masahi commented on issue #5362: [Tutorial - QNN] Prequantized MXNet model compilation.

2020-04-21 Thread GitBox


masahi commented on issue #5362:
URL: https://github.com/apache/incubator-tvm/pull/5362#issuecomment-617133013


   Can we skip calibration and still get meaningful output? I think it is 
unlikely we can update our CI for the sake of this tutorial.







[GitHub] [incubator-tvm] masahi commented on a change in pull request #5383: [PYTORCH]where, addcdiv, addcmul op support

2020-04-21 Thread GitBox


masahi commented on a change in pull request #5383:
URL: https://github.com/apache/incubator-tvm/pull/5383#discussion_r412113613



##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -337,6 +337,55 @@ def _impl(inputs, input_types):
 return _op.transform.repeat(data, repeats=repeats, axis=axis)
 return _impl
 
+
+def _parse_input_data(inp):
+import torch
+if isinstance(inp, _expr.Var):
+data = inp
+elif isinstance(inp, _expr.Call):
+data = inp
+elif isinstance(inp, torch.Tensor):

Review comment:
   ok this is because of this line
   
https://github.com/apache/incubator-tvm/blob/22db299b33f05570db2a5a406bdb37b57198a822/python/tvm/relay/frontend/pytorch.py#L1832
   
   Can you apply `_wrap_const` on it? See how badly it breaks the rest of test 
cases. Returning a raw torch tensor is not a good idea anyway.









[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #5383: [PYTORCH]where, addcdiv, addcmul op support

2020-04-21 Thread GitBox


siju-samuel commented on a change in pull request #5383:
URL: https://github.com/apache/incubator-tvm/pull/5383#discussion_r412107805



##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -337,6 +337,55 @@ def _impl(inputs, input_types):
 return _op.transform.repeat(data, repeats=repeats, axis=axis)
 return _impl
 
+
+def _parse_input_data(inp):
+import torch
+if isinstance(inp, _expr.Var):
+data = inp
+elif isinstance(inp, _expr.Call):
+data = inp
+elif isinstance(inp, torch.Tensor):

Review comment:
   
![image](https://user-images.githubusercontent.com/15828974/79861643-5800e780-83f2-11ea-8bff-f9d76382d2c3.png)
   
   Actually I never saw a numpy array as input; see the image above. Here the 
second input is `tensor([[0, 0],
   [1, 0]])`, which is a torch.Tensor.









[GitHub] [incubator-tvm] masahi commented on a change in pull request #5383: [PYTORCH]where, addcdiv, addcmul op support

2020-04-21 Thread GitBox


masahi commented on a change in pull request #5383:
URL: https://github.com/apache/incubator-tvm/pull/5383#discussion_r412101164



##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -337,6 +337,55 @@ def _impl(inputs, input_types):
 return _op.transform.repeat(data, repeats=repeats, axis=axis)
 return _impl
 
+
+def _parse_input_data(inp):
+import torch
+if isinstance(inp, _expr.Var):
+data = inp
+elif isinstance(inp, _expr.Call):
+data = inp
+elif isinstance(inp, torch.Tensor):

Review comment:
   I still don't see how we get a torch.Tensor as input. Inputs should always 
be relay values. From a Torch tensor, we should get the numpy array and wrap 
relay.const around it. Having a torch tensor in op inputs is a logic bug.









[GitHub] [incubator-tvm] masahi commented on a change in pull request #5383: [PYTORCH]where, addcdiv, addcmul op support

2020-04-21 Thread GitBox


masahi commented on a change in pull request #5383:
URL: https://github.com/apache/incubator-tvm/pull/5383#discussion_r412099376



##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -337,6 +337,55 @@ def _impl(inputs, input_types):
 return _op.transform.repeat(data, repeats=repeats, axis=axis)
 return _impl
 
+
+def _parse_input_data(inp):
+import torch
+if isinstance(inp, _expr.Var):

Review comment:
   I mean the body of Var and Call case is the same, `data = inp`, you 
don't need two if blocks





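The duplicated `Var`/`Call` branches masahi points out can be collapsed by passing a tuple of types to `isinstance`; a stand-alone sketch with placeholder classes (`Var`, `Call`, and `Tensor` here are stand-ins, not the real relay or torch types):

```python
class Var: pass
class Call: pass
class Tensor: pass  # stand-in for torch.Tensor

def parse_input(inp):
    # Var and Call both pass through unchanged, so one isinstance check suffices.
    if isinstance(inp, (Var, Call)):
        return inp
    if isinstance(inp, Tensor):
        return ("const", inp)  # stand-in for wrapping the tensor in relay.const
    raise TypeError("Don't know how to handle type %s" % type(inp))

v = Var()
print(parse_input(v) is v)  # True
```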




[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #5383: [PYTORCH]where, addcdiv, addcmul op support

2020-04-21 Thread GitBox


siju-samuel commented on a change in pull request #5383:
URL: https://github.com/apache/incubator-tvm/pull/5383#discussion_r412097547



##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -337,6 +337,55 @@ def _impl(inputs, input_types):
 return _op.transform.repeat(data, repeats=repeats, axis=axis)
 return _impl
 
+
+def _parse_input_data(inp):
+import torch
+if isinstance(inp, _expr.Var):
+data = inp
+elif isinstance(inp, _expr.Call):
+data = inp
+elif isinstance(inp, torch.Tensor):

Review comment:
   
![image](https://user-images.githubusercontent.com/15828974/79860349-25ee8600-83f0-11ea-8552-3e4c04a1bf97.png)
   
   You can see the type of inputs, which I'm printing.









[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #5383: [PYTORCH]where, addcdiv, addcmul op support

2020-04-21 Thread GitBox


siju-samuel commented on a change in pull request #5383:
URL: https://github.com/apache/incubator-tvm/pull/5383#discussion_r412096961



##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -337,6 +337,55 @@ def _impl(inputs, input_types):
 return _op.transform.repeat(data, repeats=repeats, axis=axis)
 return _impl
 
+
+def _parse_input_data(inp):
+import torch
+if isinstance(inp, _expr.Var):

Review comment:
   
![image](https://user-images.githubusercontent.com/15828974/79860322-153e1000-83f0-11ea-982f-e0f4b561a23b.png)
   













[GitHub] [incubator-tvm] siju-samuel commented on a change in pull request #5383: [PYTORCH]where, addcdiv, addcmul op support

2020-04-21 Thread GitBox


siju-samuel commented on a change in pull request #5383:
URL: https://github.com/apache/incubator-tvm/pull/5383#discussion_r412094229



##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -337,6 +337,55 @@ def _impl(inputs, input_types):
 return _op.transform.repeat(data, repeats=repeats, axis=axis)
 return _impl
 
+
+def _parse_input_data(inp):
+import torch
+if isinstance(inp, _expr.Var):

Review comment:
   If one of the inputs is `torch.ones`, it will first be converted to an 
`_expr.Call` for `torch.ones`; this happens for these kinds of inputs. Actually 
we only need to differentiate `torch.Tensor`, since TVM ops cannot take a 
`torch.Tensor` as input. 
   Any TVM expr as input, TVM will handle. 
   Wherever there is a check for `_expr.Var` and `torch.Tensor`, `_expr.Call` 
needs to be handled as well.









[GitHub] [incubator-tvm] masahi edited a comment on issue #5363: [Topi, ARM] Disbale Winograd for quantized tensors.

2020-04-21 Thread GitBox


masahi edited a comment on issue #5363:
URL: https://github.com/apache/incubator-tvm/pull/5363#issuecomment-617110577


   Thanks @anijain2305 @cbalint13 @tqchen 
   
   We should follow up to have a proper quantized winograd for ARM and possibly 
for x86 too. I'm interested in this topic, but I don't have much time recently.







[GitHub] [incubator-tvm] masahi commented on issue #5363: [Topi, ARM] Disbale Winograd for quantized tensors.

2020-04-21 Thread GitBox


masahi commented on issue #5363:
URL: https://github.com/apache/incubator-tvm/pull/5363#issuecomment-617110577


   Thanks @anijain2305 @cbalint13 @tqchen 







[incubator-tvm] branch master updated: [Topi, ARM] Disbale Winograd for quantized tensors. (#5363)

2020-04-21 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new b39bd83  [Topi, ARM] Disbale Winograd for quantized tensors. (#5363)
b39bd83 is described below

commit b39bd83172c46c61d64387fd0ee42ea477032200
Author: Animesh Jain 
AuthorDate: Tue Apr 21 04:06:13 2020 -0700

[Topi, ARM] Disbale Winograd for quantized tensors. (#5363)

* [Topi, ARM] Disbale Winograd for quantized tensors.

* Relaxing float
---
 python/tvm/relay/op/strategy/arm_cpu.py | 10 --
 1 file changed, 8 insertions(+), 2 deletions(-)

diff --git a/python/tvm/relay/op/strategy/arm_cpu.py b/python/tvm/relay/op/strategy/arm_cpu.py
index bcef8ab..942d4c7 100644
--- a/python/tvm/relay/op/strategy/arm_cpu.py
+++ b/python/tvm/relay/op/strategy/arm_cpu.py
@@ -59,16 +59,22 @@ def conv2d_strategy_arm_cpu(attrs, inputs, out_type, target):
                     wrap_compute_conv2d(topi.arm_cpu.conv2d_nchw_spatial_pack),
                     wrap_topi_schedule(topi.arm_cpu.schedule_conv2d_nchw_spatial_pack),
                     name="conv2d_nchw_spatial_pack.arm_cpu")
+
                 # Intel x86 conv2d schedule.
                 strategy.add_implementation(
                     wrap_compute_conv2d(topi.x86.conv2d_nchw),
                     wrap_topi_schedule(topi.x86.schedule_conv2d_nchw),
                     name="conv2d_nchw.x86")
+
                 # check if winograd algorithm is applicable
                 _, _, kh, kw = get_const_tuple(kernel.shape)
                 pt, pl, pb, pr = topi.nn.get_pad_tuple(padding, (kh, kw))
-                if kh == 3 and kw == 3 and stride_h == 1 and stride_w == 1 and \
-                        dilation_h == 1 and dilation_w == 1:
+                is_winograd_applicable = "float" in data.dtype and \
+                                         "float" in kernel.dtype and \
+                                         kh == 3 and kw == 3 and \
+                                         stride_h == 1 and stride_w == 1 and \
+                                         dilation_h == 1 and dilation_w == 1
+                if is_winograd_applicable:
                     strategy.add_implementation(
                         wrap_compute_conv2d(topi.arm_cpu.conv2d_nchw_winograd),
                         wrap_topi_schedule(topi.arm_cpu.schedule_conv2d_nchw_winograd),
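
The predicate this commit introduces can be pulled out as a pure function. A minimal sketch, under the assumption that dtypes arrive as strings such as "float32" or "int8" (the function name and flat argument list are illustrative, not the TOPI API):

```python
def is_winograd_applicable(data_dtype, kernel_dtype,
                           kh, kw, stride_h, stride_w,
                           dilation_h, dilation_w):
    # Winograd is restricted to float tensors (hence this fix for
    # quantized inputs) and to 3x3 kernels with unit stride/dilation.
    return ("float" in data_dtype and
            "float" in kernel_dtype and
            kh == 3 and kw == 3 and
            stride_h == 1 and stride_w == 1 and
            dilation_h == 1 and dilation_w == 1)
```

With this split out, the quantized case is excluded up front: `is_winograd_applicable("int8", "int8", 3, 3, 1, 1, 1, 1)` is False even though the shape constraints hold.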



[incubator-tvm] branch master updated: Add ability to have multiple copies of same input to onnx_inputs. (#5389)

2020-04-21 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/master by this push:
 new 5ce2c29  Add ability to have multiple copies of same input to onnx_inputs. (#5389)
5ce2c29 is described below

commit 5ce2c2968ad4416f6a3087542430bc7204ed06ba
Author: Josh Fromm 
AuthorDate: Tue Apr 21 03:57:13 2020 -0700

Add ability to have multiple copies of same input to onnx_inputs. (#5389)
---
 python/tvm/relay/frontend/onnx.py  |  3 +--
 tests/python/frontend/onnx/test_forward.py | 11 ++-
 2 files changed, 7 insertions(+), 7 deletions(-)

diff --git a/python/tvm/relay/frontend/onnx.py b/python/tvm/relay/frontend/onnx.py
index 527a1ed..245b385 100644
--- a/python/tvm/relay/frontend/onnx.py
+++ b/python/tvm/relay/frontend/onnx.py
@@ -57,8 +57,7 @@ class onnx_input():
         if isinstance(item, int):
             self.input_dict[self.input_keys[item]] = value
         elif isinstance(item, str):
-            if item not in self.input_dict:
-                self.input_keys.append(item)
+            self.input_keys.append(item)
             self.input_dict[item] = value
         else:
             raise ValueError("Only integer and string indexed writes allowed.")
diff --git a/tests/python/frontend/onnx/test_forward.py b/tests/python/frontend/onnx/test_forward.py
index 2c08494..c06aa50 100644
--- a/tests/python/frontend/onnx/test_forward.py
+++ b/tests/python/frontend/onnx/test_forward.py
@@ -1366,16 +1366,16 @@ def test_binary_ops():
     dtype = "float32"
     out_shape = in_shape
 
-    def verify_binary_ops(op, x, y, out_np, broadcast=None):
+    def verify_binary_ops(op, x, y, out_np, x_name='in1', y_name='in2', broadcast=None):
         if broadcast is None:
-            z = helper.make_node(op, ['in1', 'in2'], ['out'])
+            z = helper.make_node(op, [x_name, y_name], ['out'])
         else:
-            z = helper.make_node(op, ['in1', 'in2'], ['out'], broadcast=1)
+            z = helper.make_node(op, [x_name, y_name], ['out'], broadcast=1)
         graph = helper.make_graph([z],
                                   '_test',
-                                  inputs=[helper.make_tensor_value_info("in1",
+                                  inputs=[helper.make_tensor_value_info(x_name,
                                                                         TensorProto.FLOAT, list(in_shape)),
-                                          helper.make_tensor_value_info("in2",
+                                          helper.make_tensor_value_info(y_name,
                                                                         TensorProto.FLOAT, list(in_shape))],
                                   outputs=[helper.make_tensor_value_info("out",
                                                                          TensorProto.FLOAT, list(out_shape))])
@@ -1393,6 +1393,7 @@ def test_binary_ops():
     verify_binary_ops("Sub", x, z, x - z, broadcast=True)
     verify_binary_ops("Mul", x, y, x * y, broadcast=None)
     verify_binary_ops("Mul", x, z,  x * z, broadcast=True)
+    verify_binary_ops("Mul", x, x, x * x, x_name='in1', y_name='in1', broadcast=None)
     verify_binary_ops("Div", x, y, x / y, broadcast=None)
     verify_binary_ops("Div", x, z, x / z, broadcast=True)
     verify_binary_ops("Sum", x, y, x + y, broadcast=None)
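
The `onnx.py` change above is easiest to see in isolation. Below is a minimal, hypothetical sketch of the `onnx_input` container (the `input_keys`/`input_dict` field names mirror the diff; everything else is simplified): appending the key unconditionally lets `input_keys` list the same name twice, which is what allows a graph to feed the same input to multiple operands.

```python
class OnnxInput:
    """Simplified sketch of the onnx_input container from the diff above."""

    def __init__(self):
        self.input_keys = []   # ordered input names, duplicates allowed
        self.input_dict = {}   # name -> value

    def __setitem__(self, item, value):
        if isinstance(item, int):
            # Positional write goes through the key recorded at that index.
            self.input_dict[self.input_keys[item]] = value
        elif isinstance(item, str):
            # Append unconditionally: a model may reference the same
            # input name in more than one position.
            self.input_keys.append(item)
            self.input_dict[item] = value
        else:
            raise ValueError("Only integer and string indexed writes allowed.")
```

Writing `inp["in1"]` twice now leaves `input_keys == ["in1", "in1"]`, matching the duplicated-input test case added in `test_binary_ops`.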



[GitHub] [incubator-tvm] masahi commented on issue #5389: [Relay][Frontend][Onnx] Fix multiple identical inputs bug

2020-04-21 Thread GitBox


masahi commented on issue #5389:
URL: https://github.com/apache/incubator-tvm/pull/5389#issuecomment-617106831


   Thanks @jwfromm 







[GitHub] [incubator-tvm] masahi commented on a change in pull request #5383: [PYTORCH]where, addcdiv, addcmul op support

2020-04-21 Thread GitBox


masahi commented on a change in pull request #5383:
URL: https://github.com/apache/incubator-tvm/pull/5383#discussion_r412082233



##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -337,6 +337,55 @@ def _impl(inputs, input_types):
 return _op.transform.repeat(data, repeats=repeats, axis=axis)
 return _impl
 
+
+def _parse_input_data(inp):
+import torch
+if isinstance(inp, _expr.Var):
+data = inp
+elif isinstance(inp, _expr.Call):
+data = inp
+elif isinstance(inp, torch.Tensor):

Review comment:
   Does this really happen? I don't expect that. Could be a logic bug if it 
does.
   If this doesn't happen, I don't think we need this function.









[GitHub] [incubator-tvm] masahi commented on a change in pull request #5383: [PYTORCH]where, addcdiv, addcmul op support

2020-04-21 Thread GitBox


masahi commented on a change in pull request #5383:
URL: https://github.com/apache/incubator-tvm/pull/5383#discussion_r412081790



##
File path: python/tvm/relay/frontend/pytorch.py
##
@@ -337,6 +337,55 @@ def _impl(inputs, input_types):
 return _op.transform.repeat(data, repeats=repeats, axis=axis)
 return _impl
 
+
+def _parse_input_data(inp):
+import torch
+if isinstance(inp, _expr.Var):

Review comment:
   ```
   if isinstance(inp, (_expr.Var, _expr.Call)):
       data = inp
   
   ```









[GitHub] [incubator-tvm] kevinthesun commented on issue #5373: [REFACTOR][TIR] Migrate HoistIfThenElse to the unified pass manager

2020-04-21 Thread GitBox


kevinthesun commented on issue #5373:
URL: https://github.com/apache/incubator-tvm/issues/5373#issuecomment-617007842


   @tqchen Sorry I'm quite busy these days. Will work on it when I get time.


