[GitHub] [incubator-tvm] lixiaoquan opened a new pull request #6704: [Relay] Mix mode type inference

2020-10-16 Thread GitBox


lixiaoquan opened a new pull request #6704:
URL: https://github.com/apache/incubator-tvm/pull/6704


   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] areusch commented on pull request #6703: [µTVM] Add virtual machine, test zephyr runtime on real hardware

2020-10-16 Thread GitBox


areusch commented on pull request #6703:
URL: https://github.com/apache/incubator-tvm/pull/6703#issuecomment-710744702


   also, vagrant boxes currently hosted at: 
https://app.vagrantup.com/areusch/boxes/microtvm-staging







[GitHub] [incubator-tvm] areusch opened a new pull request #6703: [µTVM] Add virtual machine, test zephyr runtime on real hardware

2020-10-16 Thread GitBox


areusch opened a new pull request #6703:
URL: https://github.com/apache/incubator-tvm/pull/6703


   This PR adds two Vagrantfiles:
   - a µTVM base box in `tools/microtvm/base-box` intended to support general 
µTVM development. It includes all the dependencies necessary to build the 
Zephyr runtime and test it with attached hardware (i.e., using USB port 
forwarding). This means it includes cross-compilers for RISC-V, ARM, and x86, 
among others (see the Zephyr SDK).
   - a specialization of the base box which mounts the local `tvm` directory 
using host-VM shared folders, builds your local copy of TVM inside the VM, then 
creates a poetry (Python) virtualenv containing all TVM and Zephyr 
dependencies. You can use this VM to test µTVM against real hardware, for 
example:
   `tvm@microtvm:/Users/andrew/ws/tvm2$ TVM_LIBRARY_PATH=build-microtvm 
poetry run python3 tests/micro/qemu/test_zephyr.py 
--microtvm-platforms=stm32f746xx -s`
   
   This PR also includes the additional transports needed to talk to real 
hardware, specifically a pySerial-based RPC transport layer, plus utilities to 
invoke GDB to debug e.g. runtime problems and bad operator implementations, 
and to help with porting to new architectures. Because µTVM aims to be 
platform-agnostic, it assumes only that some shell command exists to launch 
GDB and connect to the SoC's debug port. Due to this constraint, an additional 
RPC server is included: `tvm.exec.microtvm_debug_shell`, which uses the 
event-driven RPC server to host the debugger in a dedicated shell so that 
signals can be forwarded to the inferior GDB.
   
   cc @tmoreau89 @tqchen @u99127 @tom-gall @liangfu @mshawcroft 







[GitHub] [incubator-tvm] tqchen opened a new pull request #6702: Resolve more warnings in msvc

2020-10-16 Thread GitBox


tqchen opened a new pull request #6702:
URL: https://github.com/apache/incubator-tvm/pull/6702


   cc @yzhliu @ZihengJiang 







[GitHub] [incubator-tvm] sxjscience commented on a change in pull request #6699: [Frontend][Relay] Fix MXNet frontend to support NLP backbones in GluonNLP

2020-10-16 Thread GitBox


sxjscience commented on a change in pull request #6699:
URL: https://github.com/apache/incubator-tvm/pull/6699#discussion_r506775666



##
File path: python/tvm/relay/frontend/mxnet.py
##
@@ -627,6 +632,21 @@ def _mx_expand_dims(inputs, attrs):
 return _op.expand_dims(inputs[0], axis=axis)
 
 
+def _mx_where(inputs, attrs):

Review comment:
   I removed the `_mx_where` and used the old implementation.









[GitHub] [incubator-tvm] sxjscience commented on a change in pull request #6699: [Frontend][Relay] Fix MXNet frontend to support NLP backbones in GluonNLP

2020-10-16 Thread GitBox


sxjscience commented on a change in pull request #6699:
URL: https://github.com/apache/incubator-tvm/pull/6699#discussion_r506769946



##
File path: python/tvm/topi/x86/batch_matmul.py
##
@@ -157,6 +163,10 @@ def batch_matmul_cblas(cfg, x, y):
 YB, N, YK = get_const_tuple(y.shape)
 assert XB == YB, "batch dimension doesn't match"
 assert XK == YK, "shapes of x and y is inconsistant"
+if out_shape is not None:
+assert out_shape[0] == XB, "got invalid output shape"
+assert out_shape[1] == M, "got invalid output shape"
+assert out_shape[2] == N, "got invalid output shape"

Review comment:
   Yes, I triggered this while following the blog 
https://medium.com/apache-mxnet/speed-up-your-bert-inference-by-3x-on-cpus-using-apache-tvm-9cf7776cd7f8.









[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6699: [Frontend][Relay] Fix MXNet frontend to support NLP backbones in GluonNLP

2020-10-16 Thread GitBox


comaniac commented on a change in pull request #6699:
URL: https://github.com/apache/incubator-tvm/pull/6699#discussion_r506769623



##
File path: python/tvm/topi/x86/batch_matmul.py
##
@@ -157,6 +163,10 @@ def batch_matmul_cblas(cfg, x, y):
 YB, N, YK = get_const_tuple(y.shape)
 assert XB == YB, "batch dimension doesn't match"
 assert XK == YK, "shapes of x and y is inconsistant"
+if out_shape is not None:
+assert out_shape[0] == XB, "got invalid output shape"
+assert out_shape[1] == M, "got invalid output shape"
+assert out_shape[2] == N, "got invalid output shape"

Review comment:
   This is interesting... I searched all batch_matmul computes and found 
that this is the only one missing this argument, which means this compute 
was never used before.









[GitHub] [incubator-tvm] tkonolige commented on a change in pull request #6656: [Rust][Diagnostics] Add initial boilerplate for Rust diagnostic interface.

2020-10-16 Thread GitBox


tkonolige commented on a change in pull request #6656:
URL: https://github.com/apache/incubator-tvm/pull/6656#discussion_r506766614



##
File path: cmake/modules/RustExt.cmake
##
@@ -0,0 +1,26 @@
+if(USE_RUST_EXT AND NOT USE_RUST_EXT EQUAL OFF)

Review comment:
   I think `if(USE_RUST_EXT)` should be fine.









[GitHub] [incubator-tvm] tkonolige commented on a change in pull request #6656: [Rust][Diagnostics] Add initial boilerplate for Rust diagnostic interface.

2020-10-16 Thread GitBox


tkonolige commented on a change in pull request #6656:
URL: https://github.com/apache/incubator-tvm/pull/6656#discussion_r506766711



##
File path: cmake/modules/RustExt.cmake
##
@@ -0,0 +1,26 @@
+if(USE_RUST_EXT AND NOT USE_RUST_EXT EQUAL OFF)

Review comment:
   _if(constant)
   True if the constant is 1, ON, YES, TRUE, Y, or a non-zero number. False if 
the constant is 0, OFF, NO, FALSE, N, IGNORE, NOTFOUND, the empty string, or 
ends in the suffix -NOTFOUND. Named boolean constants are case-insensitive. If 
the argument is not one of these constants, it is treated as a variable._
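   The quoted rule means the `NOT USE_RUST_EXT EQUAL OFF` comparison is
redundant, since `if()` already treats `OFF` (and the other named false
constants) as false. A minimal illustrative sketch, assuming a hypothetical
cache value:

   ```cmake
   # Sketch only: "OFF", "NO", "FALSE", "0", "" and *-NOTFOUND all evaluate
   # false in if(), so a plain if(USE_RUST_EXT) covers every disabled case.
   set(USE_RUST_EXT "STATIC")   # any non-false string enables the feature
   if(USE_RUST_EXT)
     message(STATUS "Rust extension enabled: ${USE_RUST_EXT}")
   endif()
   ```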










[GitHub] [incubator-tvm] jroesch commented on a change in pull request #6656: [Rust][Diagnostics] Add initial boilerplate for Rust diagnostic interface.

2020-10-16 Thread GitBox


jroesch commented on a change in pull request #6656:
URL: https://github.com/apache/incubator-tvm/pull/6656#discussion_r506766408



##
File path: cmake/modules/RustExt.cmake
##
@@ -0,0 +1,26 @@
+if(USE_RUST_EXT AND NOT USE_RUST_EXT EQUAL OFF)

Review comment:
   I am not a great CMake user so just let me know what the best way to do 
things is. I just have no clue what the "right way" in CMake is. 









[GitHub] [incubator-tvm] sxjscience commented on a change in pull request #6699: [Frontend][Relay] Fix MXNet frontend to support NLP backbones in GluonNLP

2020-10-16 Thread GitBox


sxjscience commented on a change in pull request #6699:
URL: https://github.com/apache/incubator-tvm/pull/6699#discussion_r506766247



##
File path: python/tvm/topi/x86/batch_matmul.py
##
@@ -157,6 +163,10 @@ def batch_matmul_cblas(cfg, x, y):
 YB, N, YK = get_const_tuple(y.shape)
 assert XB == YB, "batch dimension doesn't match"
 assert XK == YK, "shapes of x and y is inconsistant"
+if out_shape is not None:
+assert out_shape[0] == XB, "got invalid output shape"
+assert out_shape[1] == M, "got invalid output shape"
+assert out_shape[2] == N, "got invalid output shape"

Review comment:
   The reason is that if we do not add this, running the end-to-end script 
with `target = "llvm -mcpu=skylake-avx512 -libs=cblas"` will trigger the 
following error:
   ```python
   TypeError: Traceback (most recent call last):
 [bt] (8) 
/home/ubuntu/tvm/build/libtvm.so(tvm::relay::backend::GraphRuntimeCodegen::VisitExpr_(tvm::relay::CallNode
 const*)+0xf12) [0x7f8f383774b2]
 [bt] (7) /home/ubuntu/tvm/build/libtvm.so(+0xf87235) [0x7f8f3834b235]
 [bt] (6) 
/home/ubuntu/tvm/build/libtvm.so(tvm::relay::CompileEngineImpl::LowerInternal(tvm::relay::CCacheKey
 const&)+0x8a1) [0x7f8f38355f81]
 [bt] (5) 
/home/ubuntu/tvm/build/libtvm.so(tvm::relay::ScheduleGetter::Create(tvm::relay::Function
 const&)+0x25b) [0x7f8f3835265b]
 [bt] (4) 
/home/ubuntu/tvm/build/libtvm.so(tvm::relay::backend::MemoizedExprTranslator >::VisitExpr(tvm::RelayExpr const&)+0xa9) [0x7f8f38358b89]
 [bt] (3) 
/home/ubuntu/tvm/build/libtvm.so(tvm::relay::ExprFunctor (tvm::RelayExpr const&)>::VisitExpr(tvm::RelayExpr const&)+0x82) 
[0x7f8f38358952]
 [bt] (2) 
/home/ubuntu/tvm/build/libtvm.so(tvm::relay::ExprFunctor (tvm::RelayExpr const&)>::InitVTable()::{lambda(tvm::runtime::ObjectRef 
const&, tvm::relay::ExprFunctor 
(tvm::RelayExpr const&)>*)#6}::_FUN(tvm::runtime::ObjectRef const&, 
tvm::relay::ExprFunctor 
(tvm::RelayExpr const&)>*)+0x27) [0x7f8f3834b717]
 [bt] (1) 
/home/ubuntu/tvm/build/libtvm.so(tvm::relay::ScheduleGetter::VisitExpr_(tvm::relay::CallNode
 const*)+0x68c) [0x7f8f3835175c]
 [bt] (0) /home/ubuntu/tvm/build/libtvm.so(+0x112beab) [0x7f8f384efeab]
 File "tvm/_ffi/_cython/./packed_func.pxi", line 55, in 
tvm._ffi._cy3.core.tvm_callback
 File "/home/ubuntu/tvm/python/tvm/relay/backend/compile_engine.py", line 
284, in lower_call
   best_impl, outputs = select_implementation(op, call.attrs, inputs, 
ret_type, target)
 File "/home/ubuntu/tvm/python/tvm/relay/backend/compile_engine.py", line 
206, in select_implementation
   outs = impl.compute(attrs, inputs, out_type)
 File "/home/ubuntu/tvm/python/tvm/relay/op/op.py", line 91, in compute
   return _OpImplementationCompute(self, attrs, inputs, out_type)
 File "tvm/_ffi/_cython/./packed_func.pxi", line 321, in 
tvm._ffi._cy3.core.PackedFuncBase.__call__
 File "tvm/_ffi/_cython/./packed_func.pxi", line 266, in 
tvm._ffi._cy3.core.FuncCall
 File "tvm/_ffi/_cython/./base.pxi", line 160, in tvm._ffi._cy3.core.CALL
 [bt] (3) /home/ubuntu/tvm/build/libtvm.so(TVMFuncCall+0x65) 
[0x7f8f384f3205]
 [bt] (2) /home/ubuntu/tvm/build/libtvm.so(+0x104b8c8) [0x7f8f3840f8c8]
 [bt] (1) 
/home/ubuntu/tvm/build/libtvm.so(tvm::relay::OpImplementation::Compute(tvm::Attrs
 const&, tvm::runtime::Array const&, tvm::Type 
const&)+0xb1) [0x7f8f3840f691]
 [bt] (0) /home/ubuntu/tvm/build/libtvm.so(+0x112beab) [0x7f8f384efeab]
 File "tvm/_ffi/_cython/./packed_func.pxi", line 55, in 
tvm._ffi._cy3.core.tvm_callback
 File "/home/ubuntu/tvm/python/tvm/relay/op/strategy/generic.py", line 686, 
in _compute_batch_matmul
   return [topi_compute(inputs[0], inputs[1], out_type.shape)]
 File "/home/ubuntu/tvm/python/tvm/autotvm/task/topi_integration.py", line 
162, in wrapper
   node = topi_compute(cfg, *args)
   TypeError: batch_matmul_cblas() takes 3 positional arguments but 4 were given
   ```
   
   The root cause is that the logic here requires the batch_matmul to take the 
output_shape: 
   
https://github.com/apache/incubator-tvm/blob/461e75bd5ffaf45a0f270998514d63d11261/python/tvm/relay/op/strategy/generic.py#L685-L686
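   A hedged Python sketch (hypothetical names, not the actual TOPI code) of
the mismatch described above: the generic strategy passes three arguments
`(x, y, out_shape)` to the compute, so a compute declared without `out_shape`
raises the `TypeError`, and adding an optional `out_shape` with a consistency
check resolves it.

   ```python
   # Stand-in for a batch_matmul compute: shapes only, no tensor math.
   def batch_matmul_generic(x_shape, y_shape, out_shape=None):
       XB, M, XK = x_shape
       YB, N, YK = y_shape
       assert XB == YB, "batch dimension doesn't match"
       assert XK == YK, "shapes of x and y are inconsistent"
       if out_shape is not None:
           # the extra argument the strategy passes must agree with the inputs
           assert tuple(out_shape) == (XB, M, N), "got invalid output shape"
       return (XB, M, N)
   ```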









[GitHub] [incubator-tvm] rkimball commented on a change in pull request #6656: [Rust][Diagnostics] Add initial boilerplate for Rust diagnostic interface.

2020-10-16 Thread GitBox


rkimball commented on a change in pull request #6656:
URL: https://github.com/apache/incubator-tvm/pull/6656#discussion_r506764888



##
File path: cmake/modules/RustExt.cmake
##
@@ -0,0 +1,26 @@
+if(USE_RUST_EXT AND NOT USE_RUST_EXT EQUAL OFF)

Review comment:
   We really need to stop overloading CMake bool and string. CMake is a 
simple thing and is easily confused. In the meantime, use this instead; OFF is 
only one of the many valid values for CMake boolean false.
   ```suggestion
   if(USE_RUST_EXT AND NOT ${USE_RUST_EXT} MATCHES ${IS_FALSE_PATTERN})
   ```









[GitHub] [incubator-tvm] jroesch commented on a change in pull request #6656: [Rust][Diagnostics] Add initial boilerplate for Rust diagnostic interface.

2020-10-16 Thread GitBox


jroesch commented on a change in pull request #6656:
URL: https://github.com/apache/incubator-tvm/pull/6656#discussion_r506762857



##
File path: rust/compiler-ext/src/lib.rs
##
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+use env_logger;
+use tvm;
+use tvm::runtime::function::register_override;
+
+fn test_fn() -> Result<(), tvm::Error> {
+println!("Hello Greg from Rust!");
+Ok(())
+}
+
+fn test_fn2(message: tvm::runtime::string::String) -> Result<(), tvm::Error> {
+println!("The message: {}", message);
+Ok(())
+}
+
+tvm::export!(test_fn, test_fn2);
+
+#[no_mangle]
+fn compiler_ext_initialize() -> i32 {

Review comment:
   I think extern just sets symbol visibility, seems to have worked. 
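   For context, a hedged Rust sketch (not the exact PR code) of the
convention being discussed: `#[no_mangle]` keeps the symbol name unmangled
so the C side can find it, while adding `pub extern "C"` also pins the C
calling convention, which is the safer signature for an FFI entry point.

   ```rust
   // Sketch of an FFI-visible initializer; real setup (logger, function
   // registration) would replace the body.
   #[no_mangle]
   pub extern "C" fn compiler_ext_initialize() -> i32 {
       0 // 0 signals success to the loading side
   }
   ```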









[GitHub] [incubator-tvm] jroesch commented on a change in pull request #6656: [Rust][Diagnostics] Add initial boilerplate for Rust diagnostic interface.

2020-10-16 Thread GitBox


jroesch commented on a change in pull request #6656:
URL: https://github.com/apache/incubator-tvm/pull/6656#discussion_r506762875



##
File path: rust/compiler-ext/src/lib.rs
##
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+use env_logger;
+use tvm;
+use tvm::runtime::function::register_override;
+
+fn test_fn() -> Result<(), tvm::Error> {
+println!("Hello Greg from Rust!");
+Ok(())
+}
+
+fn test_fn2(message: tvm::runtime::string::String) -> Result<(), tvm::Error> {
+println!("The message: {}", message);
+Ok(())
+}
+
+tvm::export!(test_fn, test_fn2);
+
+#[no_mangle]
+fn compiler_ext_initialize() -> i32 {

Review comment:
   I added it. 









[GitHub] [incubator-tvm] tkonolige commented on a change in pull request #6656: [Rust][Diagnostics] Add initial boilerplate for Rust diagnostic interface.

2020-10-16 Thread GitBox


tkonolige commented on a change in pull request #6656:
URL: https://github.com/apache/incubator-tvm/pull/6656#discussion_r506762570



##
File path: rust/tvm/src/ir/diagnostics/codespan.rs
##
@@ -0,0 +1,183 @@
+use std::collections::HashMap;

Review comment:
   This file could use some comments

##
File path: rust/tvm/src/ir/diagnostics/codespan.rs
##
@@ -0,0 +1,183 @@
+use std::collections::HashMap;

Review comment:
   This file could use some comments/docs









[GitHub] [incubator-tvm] tkonolige commented on a change in pull request #6656: [Rust][Diagnostics] Add initial boilerplate for Rust diagnostic interface.

2020-10-16 Thread GitBox


tkonolige commented on a change in pull request #6656:
URL: https://github.com/apache/incubator-tvm/pull/6656#discussion_r506761995



##
File path: rust/compiler-ext/src/lib.rs
##
@@ -0,0 +1,43 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+use env_logger;
+use tvm;
+use tvm::runtime::function::register_override;
+
+fn test_fn() -> Result<(), tvm::Error> {
+println!("Hello Greg from Rust!");
+Ok(())
+}
+
+fn test_fn2(message: tvm::runtime::string::String) -> Result<(), tvm::Error> {
+println!("The message: {}", message);
+Ok(())
+}
+
+tvm::export!(test_fn, test_fn2);
+
+#[no_mangle]
+fn compiler_ext_initialize() -> i32 {

Review comment:
   This doesn't need an extern?









[GitHub] [incubator-tvm] jroesch commented on a change in pull request #6656: [Rust][Diagnostics] Add initial boilerplate for Rust diagnostic interface.

2020-10-16 Thread GitBox


jroesch commented on a change in pull request #6656:
URL: https://github.com/apache/incubator-tvm/pull/6656#discussion_r506761896



##
File path: rust/compiler-ext/Cargo.toml
##
@@ -0,0 +1,16 @@
+[package]
+name = "compiler-ext"
+version = "0.1.0"
+authors = ["Jared Roesch "]
+edition = "2018"
+# TODO(@jroesch): would be cool to figure out how to statically link instead.

Review comment:
   I figured out how to do this. 









[GitHub] [incubator-tvm] sxjscience commented on pull request #6699: [Frontend][Relay] Fix MXNet frontend to support NLP backbones in GluonNLP

2020-10-16 Thread GitBox


sxjscience commented on pull request #6699:
URL: https://github.com/apache/incubator-tvm/pull/6699#issuecomment-710693179


   The integration tests take a very long time because there are too many 
combinations. For example: 
https://github.com/apache/incubator-tvm/blob/461e75bd5ffaf45a0f270998514d63d11261/tests/python/frontend/mxnet/test_forward.py#L2119-L2125
   
   We may try to simplify the tests by not using the full cartesian product.
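   One way to avoid the full cartesian product, sketched below under the
assumption that a deterministic random subset of combinations gives enough
coverage (the helper name is hypothetical):

   ```python
   import itertools
   import random

   def sample_combinations(param_lists, k, seed=0):
       """Pick k combinations from the cartesian product, deterministically."""
       all_combos = list(itertools.product(*param_lists))
       rng = random.Random(seed)  # fixed seed keeps CI runs reproducible
       return rng.sample(all_combos, min(k, len(all_combos)))

   # e.g. 4 cases instead of the full 3 * 2 * 2 = 12
   combos = sample_combinations([[1, 2, 3], ["a", "b"], [True, False]], k=4)
   ```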







[GitHub] [incubator-tvm] tkonolige opened a new pull request #6701: Add cloudpickle dependency to docker images

2020-10-16 Thread GitBox


tkonolige opened a new pull request #6701:
URL: https://github.com/apache/incubator-tvm/pull/6701


   This dependency is needed for #6679. I also had to add a cmake 
build-from-source step to the i386 image so that xgboost builds (xgboost 
builds from source because there is no wheel for i386).
   
   @jroesch @tqchen 







[GitHub] [incubator-tvm] jwfromm opened a new pull request #6700: [Relay][Frontend][Onnx] Loop Support

2020-10-16 Thread GitBox


jwfromm opened a new pull request #6700:
URL: https://github.com/apache/incubator-tvm/pull/6700


   This PR adds a converter for Loop operators in ONNX. Loops in ONNX are 
represented as entire ONNX graphs embedded within the op. To support this 
structure, there are a few changes to the GraphProto class to allow access to 
the parent graph from a called subgraph using scope. It's worth noting that I 
encountered some issues with type unification when strided slices were part of 
the subgraph. For now, I've added a warning indicating that errors might occur 
in this case, but I will add corresponding tests once the issue is resolved.







[GitHub] [incubator-tvm] jwfromm commented on pull request #6700: [Relay][Frontend][Onnx] Loop Support

2020-10-16 Thread GitBox


jwfromm commented on pull request #6700:
URL: https://github.com/apache/incubator-tvm/pull/6700#issuecomment-710681034


   @masahi @mbrookhart @csullivan @soiferj Can you guys help review this PR?







[GitHub] [incubator-tvm] jroesch commented on pull request #6699: [Frontend][Relay] Fix MXNet frontend to support NLP backbones in GluonNLP

2020-10-16 Thread GitBox


jroesch commented on pull request #6699:
URL: https://github.com/apache/incubator-tvm/pull/6699#issuecomment-710677184


   As we add more tests, can we measure what kind of time increase this will 
induce in CI? Integration tests are becoming increasingly slow and expensive 
to run. cc @areusch and @tkonolige 







[GitHub] [incubator-tvm] tqchen commented on a change in pull request #6692: Refactor diagnostic to avoid circular dependencies

2020-10-16 Thread GitBox


tqchen commented on a change in pull request #6692:
URL: https://github.com/apache/incubator-tvm/pull/6692#discussion_r506732411



##
File path: include/tvm/ir/diagnostic_context.h
##
@@ -18,71 +18,25 @@
  */
 
 /*!
- * \file diagnostic.h
+ * \file diagnostic_context.h

Review comment:
   now that we have removed diagnostic.h I think it is better to just 
rename back to diagnostic.h for simplicity









[GitHub] [incubator-tvm] jroesch merged pull request #6698: [LLVM][WINDOWS] Recover windows support for the latest LLVM

2020-10-16 Thread GitBox


jroesch merged pull request #6698:
URL: https://github.com/apache/incubator-tvm/pull/6698


   







[incubator-tvm] branch main updated (e997185 -> 461e75b)

2020-10-16 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from e997185  [Relay] Change some passes to mix mode (#6695)
 add 461e75b  [LLVM][WINDOWS] Recover windows support for the latest LLVM 
(#6698)

No new revisions were added by this update.

Summary of changes:
 CMakeLists.txt |  2 ++
 apps/cpp_rpc/rpc_env.cc| 11 +--
 apps/cpp_rpc/win32_process.h   |  6 +-
 src/target/llvm/codegen_cpu.cc | 15 +++
 4 files changed, 31 insertions(+), 3 deletions(-)



[GitHub] [incubator-tvm] comaniac commented on a change in pull request #6699: [Frontend][Relay] Fix MXNet frontend to support NLP backbones in GluonNLP

2020-10-16 Thread GitBox


comaniac commented on a change in pull request #6699:
URL: https://github.com/apache/incubator-tvm/pull/6699#discussion_r506718116



##
File path: python/tvm/relay/frontend/mxnet.py
##
@@ -58,6 +58,11 @@
 _activation_map = {"sigmoid": _op.sigmoid, "tanh": _op.tanh, "relu": 
_op.nn.relu}
 
 
+def get_tuple_shape(shape_expr):

Review comment:
   Can we directly use `topi.util.get_const_tuple`?
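   A minimal sketch of what such a helper does: convert a shape whose
dimensions may be IntImm-like objects into a plain Python tuple of ints.
The `IntImm` stand-in below is hypothetical, mimicking the `.value`
attribute of `tvm.tir.IntImm`.

   ```python
   class IntImm:  # stand-in for tvm.tir.IntImm
       def __init__(self, value):
           self.value = value

   def get_const_tuple(shape_expr):
       # ints pass through; IntImm-like objects are unwrapped via .value
       return tuple(int(d.value) if hasattr(d, "value") else int(d)
                    for d in shape_expr)
   ```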

##
File path: python/tvm/relay/frontend/mxnet.py
##
@@ -2312,23 +2345,76 @@ def _mx_npx_reshape(inputs, attrs):
 reverse = attrs.get_bool("reverse", False)
 shape_list = list(shape)
 new_shape_list = []
-for num in shape_list:
-if num > 0 or num == -1:
-new_shape_list.append(num)
-elif num == -2:
-new_shape_list.append(0)
-elif num == -4:
-new_shape_list.append(-2)
-elif num == -5:
-new_shape_list.append(-3)
-elif num == -6:
-new_shape_list.append(-4)
-else:
-raise tvm.error.OpAttributeInvalid("Shape dimension %d is not 
supported" % num)
-shape = tuple(new_shape_list)
-if reverse:
-return _op.reverse_reshape(inputs[0], newshape=shape)
-return _op.reshape(inputs[0], newshape=shape)
+if -3 not in shape_list:
+for num in shape_list:
+if num > 0 or num == -1:
+new_shape_list.append(num)
+elif num == -2:
+new_shape_list.append(0)
+elif num == -4:
+new_shape_list.append(-2)
+elif num == -5:
+new_shape_list.append(-3)
+elif num == -6:
+new_shape_list.append(-4)

Review comment:
   ```suggestion
   elif num in [-2, -4, -5, -6]:
   new_shape_list.append(num + 2)
   ```
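   A quick sanity check (a sketch, with hypothetical function names) that the
suggested `num + 2` rewrite reproduces the original explicit branches for the
MXNet npx.reshape special codes:

   ```python
   def original(num):
       # the explicit mapping from the current code
       return {-2: 0, -4: -2, -5: -3, -6: -4}[num]

   def suggested(num):
       # the proposed collapsed form
       return num + 2
   ```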

##
File path: python/tvm/relay/frontend/mxnet.py
##
@@ -2312,23 +2345,76 @@ def _mx_npx_reshape(inputs, attrs):
 reverse = attrs.get_bool("reverse", False)
 shape_list = list(shape)
 new_shape_list = []
-for num in shape_list:
-if num > 0 or num == -1:
-new_shape_list.append(num)
-elif num == -2:
-new_shape_list.append(0)
-elif num == -4:
-new_shape_list.append(-2)
-elif num == -5:
-new_shape_list.append(-3)
-elif num == -6:
-new_shape_list.append(-4)
-else:
-raise tvm.error.OpAttributeInvalid("Shape dimension %d is not 
supported" % num)
-shape = tuple(new_shape_list)
-if reverse:
-return _op.reverse_reshape(inputs[0], newshape=shape)
-return _op.reshape(inputs[0], newshape=shape)
+if -3 not in shape_list:
+for num in shape_list:
+if num > 0 or num == -1:
+new_shape_list.append(num)
+elif num == -2:
+new_shape_list.append(0)
+elif num == -4:
+new_shape_list.append(-2)
+elif num == -5:
+new_shape_list.append(-3)
+elif num == -6:
+new_shape_list.append(-4)
+else:
+raise tvm.error.OpAttributeInvalid("Shape dimension %d is not 
supported" % num)
+shape = tuple(new_shape_list)
+if reverse:
+return _op.reverse_reshape(inputs[0], newshape=shape)
+return _op.reshape(inputs[0], newshape=shape)
+else:
+old_shape = get_tuple_shape(_infer_type(inputs[0]).checked_type.shape)
+new_shape = []
+if reverse:
+old_shape = old_shape[::-1]
+shape_list = shape_list[::-1]
+ptr = 0
+unknown_axis = None
+src_ptr = 0
+while src_ptr < len(shape_list):
+ele = shape_list[src_ptr]
+src_ptr += 1
+if ele > 0:
+new_shape.append(ele)
+ptr += 1
+elif ele == -1:
+new_shape.append(-1)
+assert unknown_axis is None, "Can only have one unknown axis."
+unknown_axis = len(new_shape)
+ptr += 1
+elif ele == -2:
+new_shape.append(old_shape[ptr])
+ptr += 1
+elif ele == -3:
+assert old_shape[ptr] == 1

Review comment:
   Better to have an error message. Ditto for the other asserts.
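   A hedged sketch of what a message-carrying assert could look like at that spot (variable names mirror the PR, but the values below are made up for illustration):

```python
old_shape = [1, 4, 5]  # hypothetical inferred input shape
ptr = 0                # source dimension being consumed by the -3 code

assert old_shape[ptr] == 1, (
    "-3 in the target shape requires the corresponding source dimension "
    "to be 1, but got %d" % old_shape[ptr]
)
```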

##
File path: python/tvm/topi/x86/batch_matmul.py
##
@@ -157,6 +163,10 @@ def batch_matmul_cblas(cfg, x, y):
 YB, N, YK = get_const_tuple(y.shape)
 assert XB == YB, "batch dimension doesn't match"
 assert XK == YK, "shapes of x and y is inconsistant"
+if out_shape is not None:
+assert out_shape[0] == XB, "got invalid output shape"
+assert out_shape[1] == M, "got invalid output shape"
+assert out_shape[2] == N, "got invalid output shape"

Review comment:
   Why do we need an additional output shape argument if we can figure it out from the input shapes?

[GitHub] [incubator-tvm] rkimball commented on pull request #6698: [LLVM][WINDOWS] Recover windows support for the latest LLVM

2020-10-16 Thread GitBox


rkimball commented on pull request #6698:
URL: https://github.com/apache/incubator-tvm/pull/6698#issuecomment-710586184


   I tested with the LLVM 11.0 release and this PR, and my failing unit test now passes. Thank you @tqchen for this.



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] sxjscience commented on pull request #6699: [Frontend][Relay] Fix MXNet frontend to support NLP backbones in GluonNLP

2020-10-16 Thread GitBox


sxjscience commented on pull request #6699:
URL: https://github.com/apache/incubator-tvm/pull/6699#issuecomment-710579653


   @yzhliu @comaniac @icemelon9 







[GitHub] [incubator-tvm] sxjscience opened a new pull request #6699: [Frontend][Relay] Fix MXNet frontend to support NLP backbones in GluonNLP

2020-10-16 Thread GitBox


sxjscience opened a new pull request #6699:
URL: https://github.com/apache/incubator-tvm/pull/6699


   Fix the MXNet 2.0 integration in Relay. Tested the BERT and ALBERT models in the new [GluonNLP 1.0](https://github.com/dmlc/gluon-nlp/tree/master) and both passed. I will later add unit tests on the GluonNLP side to ensure that the backbones can be run with the graph runtime.
   
   ```python
   import mxnet as mx
   import numpy as np
   import gluonnlp
   from gluonnlp.models import get_backbone
   import numpy.testing as npt
   
   mx.npx.set_np()
   
   model_cls, cfg, tokenizer, backbone_param_path, _ = 
get_backbone('google_albert_base_v2')
   
   model = model_cls.from_cfg(cfg)
   model.load_parameters(backbone_param_path)
   model.hybridize()
   
   
   batch_size = 1
   seq_length = 128
   token_ids = mx.np.random.randint(0, cfg.MODEL.vocab_size, (batch_size, 
seq_length), dtype=np.int32)
   token_types = mx.np.random.randint(0, 2, (batch_size, seq_length), 
dtype=np.int32)
   valid_length = mx.np.random.randint(seq_length // 2, seq_length, 
(batch_size,), dtype=np.int32)
   mx_out = model(token_ids, token_types, valid_length)
   
   import tvm
   from tvm import relay
   import tvm.contrib.graph_runtime as runtime
   
   shape_dict = {
   'data0': (batch_size, seq_length),
   'data1': (batch_size, seq_length),
   'data2': (batch_size,)
   }
   
   dtype_dict = {
   'data0': 'int32',
   'data1': 'int32',
   'data2': 'int32'
   }
   
   sym = model._cached_graph[1]
   
   params = {}
   for k, v in model.collect_params().items():
   params[v._var_name] = tvm.nd.array(v.data().asnumpy())
   mod, params = relay.frontend.from_mxnet(sym, shape=shape_dict, 
dtype=dtype_dict, arg_params=params)
   print(mod)
   # G4
   target = "cuda -model=t4"
   
   with relay.build_config(opt_level=3, required_pass=["FastMath"]):
   graph, lib, cparams = relay.build(mod, target, params=params)
   
   ctx = tvm.gpu()
   rt = runtime.create(graph, lib, ctx)
   rt.set_input(**cparams)
   rt.set_input(data0=token_ids, data1=token_types, data2=valid_length)
   rt.run()
   for i in range(rt.get_num_outputs()):
   out = rt.get_output(i)
   print(out.asnumpy())# verify the correctness
   npt.assert_allclose(out.asnumpy(), mx_out[i].asnumpy(), rtol=1e-3, 
atol=1e-2)
   ```







[GitHub] [incubator-tvm] sxjscience closed pull request #6696: [Frontend][Relay][WIP] Fix MXNet frontend to support NLP backbones in GluonNLP V1

2020-10-16 Thread GitBox


sxjscience closed pull request #6696:
URL: https://github.com/apache/incubator-tvm/pull/6696


   







[GitHub] [incubator-tvm] tkonolige commented on a change in pull request #6685: [Relay][Frontend] SparseTensorDenseMatMul support for Tensorflow

2020-10-16 Thread GitBox


tkonolige commented on a change in pull request #6685:
URL: https://github.com/apache/incubator-tvm/pull/6685#discussion_r506601559



##
File path: tests/python/frontend/tensorflow/test_forward.py
##
@@ -1750,6 +1750,64 @@ def test_forward_batch_matmul():
 _test_batch_matmul((2, 3, 4, 2, 3, 4, 5, 6), (2, 3, 4, 2, 3, 4, 5, 6), 
"float32", False, True)
 
 
+###
+# SparseTensorDenseMatMul
+# --
+
+
+def _test_sparse_dense_matmul(indices, values, A_shape, B_shape, dtype, 
flip=False):

Review comment:
   You pass in indices and values here, but then you don't use them? Or am 
I missing something?

##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -890,6 +890,44 @@ def _impl(inputs, attr, params, mod):
 return _impl
 
 
+def _sparse_tensor_dense_matmul():
+# Sparse utility from Numpy
+from scipy import sparse
+
+def _impl(inputs, attr, params, mod):
+assert len(inputs) == 4, "There should be 4 input tensors"
+
+indices_tensor = _infer_value(inputs[0], params, mod).asnumpy()
+values_tensor = _infer_value(inputs[1], params, mod).asnumpy()
+dense_shape_tensor = _infer_value(inputs[2], params, mod).asnumpy()
+
+data = inputs[3]
+
+rows = [x[0] for x in indices_tensor]
+cols = [x[1] for x in indices_tensor]
+
+# Create Numpy sparse Tensor(CSR)
+weight_sp = sparse.csr_matrix(
+(values_tensor, (rows, cols)), 
shape=tuple(dense_shape_tensor.tolist())
+)
+weight_sp = sparse.csr_matrix(weight_sp.transpose())
+
+weight_data = _expr.const(weight_sp.data, weight_sp.data.dtype)
+weight_indptrs = _expr.const(weight_sp.indptr, weight_sp.indptr.dtype)
+weight_indices = _expr.const(weight_sp.indices, 
weight_sp.indices.dtype)
+
+ret = _op.nn.sparse_dense(data, [weight_data, weight_indices, 
weight_indptrs])

Review comment:
   Reading the Tensorflow docs 
(https://www.tensorflow.org/api_docs/python/tf/sparse/sparse_dense_matmul), it 
looks like Tensorflow is computing A*B where A is sparse. Our sparse_dense 
computes B*A^T. You've done the transposition, but the order seems wrong.
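   The point rests on the transpose identity A @ B == transpose(B^T @ A^T), which is what lets TF's A @ B (with A sparse) be expressed through an op that multiplies against the transposed sparse operand, provided the operand order is right. A dependency-free check of the identity on plain list-of-lists matrices:

```python
def matmul(X, Y):
    # Naive dense matrix multiply over lists of lists.
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

A = [[1, 0, 0], [2, 0, 3]]      # stands in for the sparse operand
B = [[4, 5], [6, 7], [8, 9]]

# A @ B equals transpose(B^T @ A^T); the transposition alone is not
# enough if the operand order is swapped.
assert matmul(A, B) == transpose(matmul(transpose(B), transpose(A)))
```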

##
File path: python/tvm/topi/cuda/sparse.py
##
@@ -367,9 +367,14 @@ def _alter_sparse_dense_layout(_attrs, inputs, _tinfos, 
_out_type):
 and isinstance(inputs[2], relay.Constant)
 and isinstance(inputs[3], relay.Constant)
 ):
-sparse_matrix = sp.bsr_matrix(
-(inputs[1].data.asnumpy(), inputs[2].data.asnumpy(), 
inputs[3].data.asnumpy())
-)
+if len(inputs[1].data.asnumpy().shape) == 1:

Review comment:
   Why this change? If you pass a 1D array to `bsr_matrix` it works just fine.









[GitHub] [incubator-tvm] tqchen commented on pull request #6698: [LLVM][WINDOWS] Recover windows support for the latest LLVM

2020-10-16 Thread GitBox


tqchen commented on pull request #6698:
URL: https://github.com/apache/incubator-tvm/pull/6698#issuecomment-710202863


   cc @tmoreau89 @rkimball 







[GitHub] [incubator-tvm] tqchen opened a new pull request #6698: [LLVM][WINDOWS] Recover windows support for the latest LLVM

2020-10-16 Thread GitBox


tqchen opened a new pull request #6698:
URL: https://github.com/apache/incubator-tvm/pull/6698


   Windows COFF requires comdat information to support weak-like linkage (via `any`).
   This patch fixes the Windows LLVM support after LLVM 8.
   







[GitHub] [incubator-tvm] tqchen edited a comment on pull request #6692: Refactor diagnostic to avoid circular dependencies

2020-10-16 Thread GitBox


tqchen edited a comment on pull request #6692:
URL: https://github.com/apache/incubator-tvm/pull/6692#issuecomment-710126019


   Thanks @rkimball . I feel the PR can still be simplified further to avoid the potential confusion of two diagnostic header files (one minimal and another built for IR). In particular we can:
   
   - Keep all the content of ir/diagnostic.h as it is in IR; because it introduces a larger chunk of deps, it should be part of IR, and the TVM runtime should not depend on it.
   - Move all the ICHECK defs into support/logging.h, where there are already some related definitions of checks; this is a minimal set of check macros that can be used in the runtime.
   
   







[incubator-tvm] branch main updated: [Relay] Change some passes to mix mode (#6695)

2020-10-16 Thread tqchen
This is an automated email from the ASF dual-hosted git repository.

tqchen pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new e997185  [Relay] Change some passes to mix mode (#6695)
e997185 is described below

commit e997185795480d24075a2e7d3fa42ccec425b5f6
Author: lixiaoquan 
AuthorDate: Fri Oct 16 23:47:27 2020 +0800

[Relay] Change some passes to mix mode (#6695)
---
 src/relay/analysis/util.cc|  8 ++--
 src/relay/analysis/well_formed.cc | 16 +++-
 src/relay/ir/expr_functor.cc  |  4 +++-
 src/relay/transforms/de_duplicate.cc  |  6 --
 src/relay/transforms/fold_constant.cc | 32 
 5 files changed, 36 insertions(+), 30 deletions(-)

diff --git a/src/relay/analysis/util.cc b/src/relay/analysis/util.cc
index 59ce01c..edf8fb6 100644
--- a/src/relay/analysis/util.cc
+++ b/src/relay/analysis/util.cc
@@ -71,7 +71,7 @@ class TypeVarTVisitor : public TypeVisitor {
  InsertionSet<TypeVar>* bound_type_vars_;
 };
 
-class TypeVarEVisitor : private ExprVisitor {
+class TypeVarEVisitor : private MixedModeVisitor {
  public:
   explicit TypeVarEVisitor(const IRModule& mod) : mod_(mod) {}
 
@@ -131,6 +131,8 @@ class TypeVarEVisitor : private ExprVisitor {
 return CollectAll();
   }
 
+  using MixedModeVisitor::VisitExpr_;
+
   void VisitExpr_(const FunctionNode* f) final {
 for (const auto& tp : f->type_params) {
   type_vars_.Insert(tp);
@@ -159,7 +161,7 @@ class TypeVarEVisitor : private ExprVisitor {
   const IRModule& mod_;
 };
 
-class VarVisitor : protected ExprVisitor, protected PatternVisitor {
+class VarVisitor : protected MixedModeVisitor, protected PatternVisitor {
  public:
  Array<Var> Free(const Expr& expr) {
 this->VisitExpr(expr);
@@ -204,6 +206,8 @@ class VarVisitor : protected ExprVisitor, protected 
PatternVisitor {
 vars_.Insert(v);
   }
 
+  using MixedModeVisitor::VisitExpr_;
+
  void VisitExpr_(const VarNode* var) final { vars_.Insert(GetRef<Var>(var)); }
 
   void VisitExpr_(const FunctionNode* op) final {
diff --git a/src/relay/analysis/well_formed.cc 
b/src/relay/analysis/well_formed.cc
index 3e409d1..5abbbc9 100644
--- a/src/relay/analysis/well_formed.cc
+++ b/src/relay/analysis/well_formed.cc
@@ -32,7 +32,7 @@ namespace tvm {
 namespace relay {
 
 //! brief make sure each Var is bound at most once in a scope.
-class WellFormedChecker : private ExprVisitor, PatternVisitor {
+class WellFormedChecker : private MixedModeVisitor, PatternVisitor {
  public:
  Optional<DiagnosticContext> diag_ctx;
   Span occurs_in;
@@ -79,6 +79,8 @@ class WellFormedChecker : private ExprVisitor, PatternVisitor 
{
 total_bound.insert(v);
   }
 
+  using MixedModeVisitor::VisitExpr_;
+
   void VisitExpr_(const VarNode* op) final {
    Var v = GetRef<Var>(op);
 if (current_bound.count(v) == 0) {
@@ -126,7 +128,7 @@ class WellFormedChecker : private ExprVisitor, 
PatternVisitor {
 
 // CHECK(call->attrs.defined());
 CHECK(call->type_args.defined());
-ExprVisitor::VisitExpr_(call);
+MixedModeVisitor::VisitExpr_(call);
   }
 
   void VisitClause(const Clause& c) final {
@@ -139,18 +141,14 @@ class WellFormedChecker : private ExprVisitor, 
PatternVisitor {
 
   void VisitVar(const Var& v) final { Bound(v); }
 
-  void VisitExpr(const Expr& e) final {
+ public:
+  bool CheckWellFormed(const Expr& e) {
    if (auto v = e.as<VarNode>()) {
   VisitExpr_(v);
 } else {
   // this->occurs_in = e->span;
-  ExprVisitor::VisitExpr(e);
+  VisitExpr(e);
 }
-  }
-
- public:
-  bool CheckWellFormed(const Expr& e) {
-this->VisitExpr(e);
 return well_formed;
   }
 };
diff --git a/src/relay/ir/expr_functor.cc b/src/relay/ir/expr_functor.cc
index cbc41d2..a09179b 100644
--- a/src/relay/ir/expr_functor.cc
+++ b/src/relay/ir/expr_functor.cc
@@ -517,10 +517,12 @@ 
TVM_REGISTER_GLOBAL("relay.analysis.post_order_visit").set_body_typed([](Expr ex
 });
 
 // Implement bind.
-class ExprBinder : public ExprMutator, PatternMutator {
+class ExprBinder : public MixedModeMutator, PatternMutator {
  public:
  explicit ExprBinder(const tvm::Map<Var, Expr>& args_map) : args_map_(args_map) {}
 
+  using MixedModeMutator::VisitExpr_;
+
   Expr VisitExpr_(const LetNode* op) final {
 CHECK(!args_map_.count(op->var)) << "Cannot bind an internel variable in 
let";
 return ExprMutator::VisitExpr_(op);
diff --git a/src/relay/transforms/de_duplicate.cc 
b/src/relay/transforms/de_duplicate.cc
index d90e5c5..8c62fe6 100644
--- a/src/relay/transforms/de_duplicate.cc
+++ b/src/relay/transforms/de_duplicate.cc
@@ -31,7 +31,7 @@ namespace tvm {
 namespace relay {
 
 Expr DeDup(const Expr& e) {
-  class DeDupMutator : public TypeMutator, public ExprMutator, public 
PatternMutator {
+  class DeDupMutator : public TypeMutator, public MixedModeMutator, public 
PatternMutator {
public:
 TypeVar Fresh(const TypeVar& tv) {
   

[GitHub] [incubator-tvm] tqchen merged pull request #6695: [Relay] Change some passes to mix mode

2020-10-16 Thread GitBox


tqchen merged pull request #6695:
URL: https://github.com/apache/incubator-tvm/pull/6695


   







[GitHub] [incubator-tvm] tqchen commented on pull request #6692: Refactor diagnostic to avoid circular dependencies

2020-10-16 Thread GitBox


tqchen commented on pull request #6692:
URL: https://github.com/apache/incubator-tvm/pull/6692#issuecomment-710126019


   As a minimum set of changes to the PR, I would recommend keeping the files as they are and just moving the ICHECK defs into `support/logging.h`.
   
   The additional diagnostics can still be part of tvm/ir/diagnostic.h, and we do not need the additional context suffix, which might cause confusion.
   
   







[GitHub] [incubator-tvm] tqchen commented on a change in pull request #6692: Refactor diagnostic to avoid circular dependencies

2020-10-16 Thread GitBox


tqchen commented on a change in pull request #6692:
URL: https://github.com/apache/incubator-tvm/pull/6692#discussion_r506560441



##
File path: include/tvm/support/diagnostic_context.h
##
@@ -27,62 +27,20 @@
  * replace the existing errors.h.
  */
 
-#ifndef TVM_IR_DIAGNOSTIC_H_
-#define TVM_IR_DIAGNOSTIC_H_
+#ifndef TVM_IR_DIAGNOSTIC_CONTEXT_H_
+#define TVM_IR_DIAGNOSTIC_CONTEXT_H_
 
 #include 
-#include 
 #include 
-#include 
-#include 
-#include 
 
-#include 
+#include 
 #include 
-#include 
-#include 
 
 namespace tvm {
 
 using tvm::parser::SourceMap;
 using tvm::runtime::TypedPackedFunc;
 
-extern const char* kTVM_INTERNAL_ERROR_MESSAGE;
-
-#define ICHECK_INDENT "  "
-
-#define ICHECK_BINARY_OP(name, op, x, y)   \
-  if (dmlc::LogCheckError _check_err = dmlc::LogCheck##name(x, y)) \
-  dmlc::LogMessageFatal(__FILE__, __LINE__).stream()   \
-  << kTVM_INTERNAL_ERROR_MESSAGE << std::endl  \
-  << ICHECK_INDENT << "Check failed: " << #x " " #op " " #y << 
*(_check_err.str) << ": "
-
-#define ICHECK(x)\
-  if (!(x))  \
-  dmlc::LogMessageFatal(__FILE__, __LINE__).stream() \
-  << kTVM_INTERNAL_ERROR_MESSAGE << ICHECK_INDENT << "Check failed: " #x 
<< " == false: "
-
-#define ICHECK_LT(x, y) ICHECK_BINARY_OP(_LT, <, x, y)
-#define ICHECK_GT(x, y) ICHECK_BINARY_OP(_GT, >, x, y)
-#define ICHECK_LE(x, y) ICHECK_BINARY_OP(_LE, <=, x, y)
-#define ICHECK_GE(x, y) ICHECK_BINARY_OP(_GE, >=, x, y)
-#define ICHECK_EQ(x, y) ICHECK_BINARY_OP(_EQ, ==, x, y)
-#define ICHECK_NE(x, y) ICHECK_BINARY_OP(_NE, !=, x, y)
-#define ICHECK_NOTNULL(x)  
 \
-  ((x) == nullptr ? dmlc::LogMessageFatal(__FILE__, __LINE__).stream() 
 \
-<< kTVM_INTERNAL_ERROR_MESSAGE << __INDENT << "Check 
not null: " #x \
-<< ' ',
 \
-   (x) : (x))  // NOLINT(*)
-
-/*! \brief The diagnostic level, controls the printing of the message. */
-enum class DiagnosticLevel : int {
-  kBug = 10,
-  kError = 20,
-  kWarning = 30,
-  kNote = 40,
-  kHelp = 50,
-};
-

Review comment:
   The diagnostic context should not be part of support, because right now it brings a large amount of deps related to IR. Keep it as is in include/tvm/ir/diagnostic.h.









[GitHub] [incubator-tvm] tqchen commented on a change in pull request #6692: Refactor diagnostic to avoid circular dependencies

2020-10-16 Thread GitBox


tqchen commented on a change in pull request #6692:
URL: https://github.com/apache/incubator-tvm/pull/6692#discussion_r506559623



##
File path: include/tvm/support/diagnostic.h
##
@@ -0,0 +1,74 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ * \file diagnostic.h
+ * \brief A new diagnostic interface for TVM error reporting.
+ *
+ * A prototype of the new diagnostic reporting interface for TVM.
+ *
+ * Eventually we hope to promote this file to the top-level and
+ * replace the existing errors.h.
+ */
+
+#ifndef TVM_IR_DIAGNOSTIC_H_
+#define TVM_IR_DIAGNOSTIC_H_
+
+#include 
+
+namespace tvm {
+
+extern const char* kTVM_INTERNAL_ERROR_MESSAGE;

Review comment:
   Merge this file's content into tvm/support/logging.h, as related content is already there.

##
File path: include/tvm/support/diagnostic_context.h
##
@@ -27,62 +27,20 @@
  * replace the existing errors.h.
  */
 
-#ifndef TVM_IR_DIAGNOSTIC_H_
-#define TVM_IR_DIAGNOSTIC_H_
+#ifndef TVM_IR_DIAGNOSTIC_CONTEXT_H_
+#define TVM_IR_DIAGNOSTIC_CONTEXT_H_
 
 #include 
-#include 
 #include 
-#include 
-#include 
-#include 
 
-#include 
+#include 
 #include 
-#include 
-#include 
 
 namespace tvm {
 
 using tvm::parser::SourceMap;
 using tvm::runtime::TypedPackedFunc;
 
-extern const char* kTVM_INTERNAL_ERROR_MESSAGE;
-
-#define ICHECK_INDENT "  "
-
-#define ICHECK_BINARY_OP(name, op, x, y)   \
-  if (dmlc::LogCheckError _check_err = dmlc::LogCheck##name(x, y)) \
-  dmlc::LogMessageFatal(__FILE__, __LINE__).stream()   \
-  << kTVM_INTERNAL_ERROR_MESSAGE << std::endl  \
-  << ICHECK_INDENT << "Check failed: " << #x " " #op " " #y << 
*(_check_err.str) << ": "
-
-#define ICHECK(x)\
-  if (!(x))  \
-  dmlc::LogMessageFatal(__FILE__, __LINE__).stream() \
-  << kTVM_INTERNAL_ERROR_MESSAGE << ICHECK_INDENT << "Check failed: " #x 
<< " == false: "
-
-#define ICHECK_LT(x, y) ICHECK_BINARY_OP(_LT, <, x, y)
-#define ICHECK_GT(x, y) ICHECK_BINARY_OP(_GT, >, x, y)
-#define ICHECK_LE(x, y) ICHECK_BINARY_OP(_LE, <=, x, y)
-#define ICHECK_GE(x, y) ICHECK_BINARY_OP(_GE, >=, x, y)
-#define ICHECK_EQ(x, y) ICHECK_BINARY_OP(_EQ, ==, x, y)
-#define ICHECK_NE(x, y) ICHECK_BINARY_OP(_NE, !=, x, y)
-#define ICHECK_NOTNULL(x)  
 \
-  ((x) == nullptr ? dmlc::LogMessageFatal(__FILE__, __LINE__).stream() 
 \
-<< kTVM_INTERNAL_ERROR_MESSAGE << __INDENT << "Check 
not null: " #x \
-<< ' ',
 \
-   (x) : (x))  // NOLINT(*)
-
-/*! \brief The diagnostic level, controls the printing of the message. */
-enum class DiagnosticLevel : int {
-  kBug = 10,
-  kError = 20,
-  kWarning = 30,
-  kNote = 40,
-  kHelp = 50,
-};
-

Review comment:
   The diagnostic context should not be part of support, because right now it brings a large amount of deps that belong to IR. Keep it as is in include/tvm/ir/diagnostic.h.









[GitHub] [incubator-tvm] mbrookhart commented on pull request #6695: [Relay] Change some passes to mix mode

2020-10-16 Thread GitBox


mbrookhart commented on pull request #6695:
URL: https://github.com/apache/incubator-tvm/pull/6695#issuecomment-710104990


   @jroesch or @tqchen, care to take a look and/or merge?







[GitHub] [incubator-tvm] mbrookhart commented on a change in pull request #6695: [Relay] Change some passes to mix mode

2020-10-16 Thread GitBox


mbrookhart commented on a change in pull request #6695:
URL: https://github.com/apache/incubator-tvm/pull/6695#discussion_r506527342



##
File path: src/relay/transforms/fold_constant.cc
##
@@ -118,7 +120,7 @@ class ConstantFolder : public ExprMutator {
 }
   }
 
-  Expr VisitExpr_(const CallNode* call) final {
+  Expr Rewrite_(const CallNode* call, const Expr& post) final {

Review comment:
   (nitpick) You could locally rename this argument from post to res, and 
then you wouldn't need most of the other changes in the function?









[GitHub] [incubator-tvm] mbaret commented on pull request #6697: [BYOC] Allow custom codegens to register their own constant updater

2020-10-16 Thread GitBox


mbaret commented on pull request #6697:
URL: https://github.com/apache/incubator-tvm/pull/6697#issuecomment-710020641


   cc @manupa-arm @zhiics @comaniac 







[GitHub] [incubator-tvm] mbaret opened a new pull request #6697: [BYOC] Allow custom codegens to register their own constant updater

2020-10-16 Thread GitBox


mbaret opened a new pull request #6697:
URL: https://github.com/apache/incubator-tvm/pull/6697


   Currently, all codegens using BYOC must make use of the default ConstantUpdater pass. However, certain codegens, like Ethos-N, don't want to store any constants in the metadata module. This provides an interface (via a global) to register a custom constant-updating method and assigns a 'null' updater for the Ethos-N codegen.
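   An illustrative pure-Python sketch of that registration pattern (this is not TVM's actual API; the names and global-registry shape are assumptions for illustration only):

```python
# Registry mapping codegen name -> constant updater. Codegens without a
# registered updater fall back to the default, which records every constant.
_CONSTANT_UPDATERS = {}

def register_constant_updater(codegen, fn):
    _CONSTANT_UPDATERS[codegen] = fn

def default_updater(constants):
    # Default behaviour: store all constants in the metadata module.
    return dict(constants)

def get_constant_updater(codegen):
    return _CONSTANT_UPDATERS.get(codegen, default_updater)

# A 'null' updater for a codegen that wants no constants stored.
register_constant_updater("ethos-n", lambda constants: {})

assert get_constant_updater("ethos-n")({"weight": 1}) == {}
assert get_constant_updater("other")({"weight": 1}) == {"weight": 1}
```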







[GitHub] [incubator-tvm] hzfan commented on issue #6691: [Performance] Performance regression with int64 indices INDEX_DEFAULT_I64=ON (PR #6143)

2020-10-16 Thread GitBox


hzfan commented on issue #6691:
URL: https://github.com/apache/incubator-tvm/issues/6691#issuecomment-709956681


   Sure. I will look into that. Thanks @trevor-m 







[GitHub] [incubator-tvm] masahi merged pull request #6602: [Torch, Quantization] Necessary workaround to prepare for 1.6 update

2020-10-16 Thread GitBox


masahi merged pull request #6602:
URL: https://github.com/apache/incubator-tvm/pull/6602


   







[incubator-tvm] branch main updated (4c4d3dc -> cf49e8b)

2020-10-16 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from 4c4d3dc  Fix tutorial broken by Docker build (#6694)
 add cf49e8b  [Torch, Quantization] Necessary workaround to prepare for 1.6 
update (#6602)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/pytorch.py   |  15 +-
 .../tvm/relay/frontend/pytorch_utils.py|  17 +-
 python/tvm/relay/frontend/qnn_torch.py | 186 ++---
 3 files changed, 134 insertions(+), 84 deletions(-)
 copy tests/scripts/task_python_ethosn_tests.sh => 
python/tvm/relay/frontend/pytorch_utils.py (76%)
 mode change 100755 => 100644



[GitHub] [incubator-tvm] masahi commented on pull request #6602: [Torch, Quantization] Necessary workaround to prepare for 1.6 update

2020-10-16 Thread GitBox


masahi commented on pull request #6602:
URL: https://github.com/apache/incubator-tvm/pull/6602#issuecomment-709954612


   Thanks @siju-samuel 







[GitHub] [incubator-tvm] sxjscience opened a new pull request #6696: [Frontend][Relay][WIP] Fix MXNet frontend to support BERT in GluonNLP V1

2020-10-16 Thread GitBox


sxjscience opened a new pull request #6696:
URL: https://github.com/apache/incubator-tvm/pull/6696


   Fix the MXNet 2.0 integration in Relay. Tested the BERT and ALBERT models in the new GluonNLP v1 and both passed. Will later add unit tests in GluonNLP to ensure that most backbones can be run with the graph runtime.
   
   ```python
   import mxnet as mx
   import numpy as np
   import gluonnlp
   from gluonnlp.models import get_backbone
   import numpy.testing as npt
   
   mx.npx.set_np()
   
   model_cls, cfg, tokenizer, backbone_param_path, _ = 
get_backbone('google_albert_base_v2')
   
   model = model_cls.from_cfg(cfg)
   model.load_parameters(backbone_param_path)
   model.hybridize()
   
   
   batch_size = 1
   seq_length = 128
   token_ids = mx.np.random.randint(0, cfg.MODEL.vocab_size, (batch_size, 
seq_length), dtype=np.int32)
   token_types = mx.np.random.randint(0, 2, (batch_size, seq_length), 
dtype=np.int32)
   valid_length = mx.np.random.randint(seq_length // 2, seq_length, 
(batch_size,), dtype=np.int32)
   mx_out = model(token_ids, token_types, valid_length)
   
   import tvm
   from tvm import relay
   import tvm.contrib.graph_runtime as runtime
   
   shape_dict = {
   'data0': (batch_size, seq_length),
   'data1': (batch_size, seq_length),
   'data2': (batch_size,)
   }
   
   dtype_dict = {
   'data0': 'int32',
   'data1': 'int32',
   'data2': 'int32'
   }
   
   sym = model._cached_graph[1]
   
   params = {}
   for k, v in model.collect_params().items():
   params[v._var_name] = tvm.nd.array(v.data().asnumpy())
   mod, params = relay.frontend.from_mxnet(sym, shape=shape_dict, 
dtype=dtype_dict, arg_params=params)
   print(mod)
   # G4
   target = "cuda -model=t4"
   
   with relay.build_config(opt_level=3, required_pass=["FastMath"]):
   graph, lib, cparams = relay.build(mod, target, params=params)
   
   ctx = tvm.gpu()
   rt = runtime.create(graph, lib, ctx)
   rt.set_input(**cparams)
   rt.set_input(data0=token_ids, data1=token_types, data2=valid_length)
   rt.run()
   for i in range(rt.get_num_outputs()):
   out = rt.get_output(i)
   print(out.asnumpy())# verify the correctness
   npt.assert_allclose(out.asnumpy(), mx_out[i].asnumpy(), rtol=1e-3, 
atol=1e-2)
   ```


