[GitHub] [incubator-tvm] abergeron opened a new issue #5704: tests/python/integration/test_ewise.py::test_add fail on macOS with Metal

2020-05-30 Thread GitBox


abergeron opened a new issue #5704:
URL: https://github.com/apache/incubator-tvm/issues/5704


   To reproduce, build on macOS with Metal enabled (I use min-version=10.11, if that makes a difference), install the Python modules, and run the test with pytest.
   
   The error seems to complain about the "float44" dtype, which I think doesn't exist.
   
   ```
   [23:06:17] /Users/anakha/miniconda/conda-bld/tvm-libs_1590891729956/work/src/runtime/metal/metal_device_api.mm:131: Intializing Metal device 0, name=AMD Radeon R9 M290
   ___ test_add ___
   
   def test_add():
       def run(dtype):
           # graph
           n = te.size_var('n')
           A = te.placeholder((n,), name='A', dtype=dtype)
           B = te.placeholder((n,), name='B', dtype=dtype)
           bias = te.var("bias", dtype=dtype)
           scale = te.var("scale", dtype=dtype)
           C = te.compute(A.shape, lambda *i: A(*i) + B(*i), name='C')
           # schedule
           s = te.create_schedule(C.op)
           # create iter var and assign them tags.
           num_thread = 16
           bx, x = s[C].split(C.op.axis[0], factor=num_thread*4)
           tx, x = s[C].split(x, nparts=num_thread)
           _, x = s[C].split(x, factor=4)
           s[C].bind(bx, te.thread_axis("blockIdx.x"))
           s[C].bind(tx, te.thread_axis("threadIdx.x"))
           s[C].vectorize(x)
   
           # one line to build the function.
           def check_device(device):
               ctx = tvm.context(device, 0)
               if not ctx.exist:
                   print("skip because %s is not enabled.." % device)
                   return
               fadd = tvm.build(s, [A, B, C],
                                device,
                                name="myadd")
   
               # launch the kernel.
               n = 1024
               a = tvm.nd.array((np.random.uniform(size=n) * 256).astype(A.dtype), ctx)
               b = tvm.nd.array((np.random.uniform(size=n) * 256).astype(B.dtype), ctx)
               c = tvm.nd.array(np.zeros(n, dtype=C.dtype), ctx)
               ftimer = fadd.time_evaluator(fadd.entry_name, ctx, number=1)
               tcost = ftimer(a, b, c).mean
               tvm.testing.assert_allclose(
                   c.asnumpy(), a.asnumpy() + b.asnumpy(), rtol=1e-6)
   
           check_device("opencl")
           check_device("cuda")
           if dtype == "float32":
               check_device("metal")
               check_device("vulkan")
   
   >   run("float32")
   
   tests/python/integration/test_ewise.py:254: 
   _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
   tests/python/integration/test_ewise.py:251: in run
   check_device("metal")
   tests/python/integration/test_ewise.py:244: in check_device
   tcost = ftimer(a, b, c).mean
   
../_test_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_p/lib/python3.7/site-packages/tvm/runtime/module.py:215:
 in evaluator
   blob = feval(*args)
   tvm/_ffi/_cython/./packed_func.pxi:321: in tvm._ffi._cy3.core.PackedFuncBase.__call__
   ???
   tvm/_ffi/_cython/./packed_func.pxi:256: in tvm._ffi._cy3.core.FuncCall
   ???
   tvm/_ffi/_cython/./packed_func.pxi:245: in tvm._ffi._cy3.core.FuncCall3
   ???
   _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ 
   
   >   ???
   E   tvm._ffi.base.TVMError: Traceback (most recent call last):
   E [bt] (3) 4   libtvm.dylib  0x00011d8f4f48 TVMFuncCall + 72
   E [bt] (2) 3   libtvm.dylib  0x00011d93c687 std::__1::__function::__func, void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&) + 359
   E [bt] (1) 2   libtvm.dylib  0x00011d900103 std::__1::__function::__func const&)::$_0, std::__1::allocator const&)::$_0>, void (tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)>::operator()(tvm::runtime::TVMArgs&&, tvm::runtime::TVMRetValue*&&) + 259
   E [bt] (0) 1   libtvm.dylib  0x00011ceb84bf dmlc::LogMessageFatal::~LogMessageFatal() + 111
   E [bt] (7) 8   ???           0x001a26946585 0x0 + 112316409221
   E [bt] (6) 7   libtvm.dylib  0x00011d8f4be4 TVMBackendGetFuncFromEnv + 164
   E [bt] (5) 6   libtvm.dylib  0x00011d901d77 tvm::runtime::ModuleNode::GetFuncFromEnv(std::__1::basic_string, std::__1::allocator > const&) + 231
   E [bt] (4) 5   
   ```
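
   For anyone trying to reproduce this outside of pytest, here is a condensed standalone sketch of the failing case (my own, not part of the original report; it assumes a TVM build with `USE_METAL=ON` and the 0.7-dev API that the test above uses):

```
# Condensed repro sketch of test_ewise.py::test_add on Metal
# (assumes USE_METAL=ON and the TVM 0.7-dev API; not from the original report).
import numpy as np
import tvm
from tvm import te

n = te.size_var("n")
A = te.placeholder((n,), name="A", dtype="float32")
B = te.placeholder((n,), name="B", dtype="float32")
C = te.compute(A.shape, lambda *i: A(*i) + B(*i), name="C")

s = te.create_schedule(C.op)
num_thread = 16
bx, x = s[C].split(C.op.axis[0], factor=num_thread * 4)
tx, x = s[C].split(x, nparts=num_thread)
_, x = s[C].split(x, factor=4)
s[C].bind(bx, te.thread_axis("blockIdx.x"))
s[C].bind(tx, te.thread_axis("threadIdx.x"))
s[C].vectorize(x)  # the vectorized inner axis is what produces vectorized (float4) loads/stores

ctx = tvm.context("metal", 0)
if not ctx.exist:
    raise SystemExit("Metal context not available")
fadd = tvm.build(s, [A, B, C], "metal", name="myadd")

size = 1024
a = tvm.nd.array((np.random.uniform(size=size) * 256).astype("float32"), ctx)
b = tvm.nd.array((np.random.uniform(size=size) * 256).astype("float32"), ctx)
c = tvm.nd.array(np.zeros(size, dtype="float32"), ctx)
fadd(a, b, c)
tvm.testing.assert_allclose(c.asnumpy(), a.asnumpy() + b.asnumpy(), rtol=1e-6)
```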

[GitHub] [incubator-tvm-vta] liangfu commented on pull request #8: [Hardware][Xilinx] explicitly specify acc dep distance to avoid hidden pitfall

2020-05-30 Thread GitBox


liangfu commented on pull request #8:
URL: https://github.com/apache/incubator-tvm-vta/pull/8#issuecomment-636420830


   I would agree with
   
   > derive the value from the VTA target (i.e. FPGA type) in pkg_config
   
   and avoid requiring the user to specify the `ACC_DEP_DISTANCE` parameter.
   



This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [incubator-tvm] comaniac opened a new pull request #5703: [PatternLang] Simplify Pattern API Implementations

2020-05-30 Thread GitBox


comaniac opened a new pull request #5703:
URL: https://github.com/apache/incubator-tvm/pull/5703


   For each pattern node, we define an `XXPattern` Python class for the FFI connection. We also define user-friendly APIs such as `is_op` for a better user experience. Since the `is_XX` APIs are just wrappers around the corresponding `XXPattern` classes, this PR simplifies their implementations by turning the APIs into function aliases, as sketched below.
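
   A rough sketch of what the aliasing looks like (illustrative only, not the exact diff; it assumes the `VarPattern` class exposed by `tvm.relay.dataflow_pattern`):

```
# Illustration of replacing a thin wrapper API with a function alias (sketch, not the PR diff).
from tvm.relay import dataflow_pattern as dp

# before: a wrapper that only forwards its argument to the pattern class
def is_var(name=""):
    return dp.VarPattern(name)

# after: the public API name is simply an alias of the pattern constructor
is_var = dp.VarPattern

pattern = is_var("x")  # the user-facing call is unchanged
```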
   
   In addition, this PR also makes the following changes:
   
   1. Add `is_constant`, `is_tuple`, and `is_tuple_get_item` APIs.
   2. Rename `is_input` to `is_var` to be consistent with `is_constant`.
   
   @mbrookhart I'm not quite sure whether the second change makes sense to you, so I intentionally separated it into the second commit. If you prefer `is_input`, I can simply revert it.
   
   Also cc @masahi 







[GitHub] [incubator-tvm] kevinthesun commented on pull request #5684: [AutoTVM][TOPI] Fix bifrost spatial packing conv2d auto tune

2020-05-30 Thread GitBox


kevinthesun commented on pull request #5684:
URL: https://github.com/apache/incubator-tvm/pull/5684#issuecomment-636407068


   Also can you fix winograd kernel replacement?







[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #5684: [AutoTVM][TOPI] Fix bifrost spatial packing conv2d auto tune

2020-05-30 Thread GitBox


kevinthesun commented on a change in pull request #5684:
URL: https://github.com/apache/incubator-tvm/pull/5684#discussion_r432900758



##
File path: topi/python/topi/arm_cpu/conv2d_spatial_pack.py
##
@@ -267,9 +270,13 @@ def conv2d_spatial_pack_nhwc(cfg, data, kernel, strides, padding, dilation, out_
     data_vec = te.compute(dvshape, lambda n, oho, owo, ohi, owi, ic:
                           data_pad[n][oho*OHI*HSTR+ohi][owo*OWI*WSTR+owi][ic],
                           name='data_vec')
-    kernel_vec = te.compute(kvshape, lambda oco, kh, kw, ic, oci: \
-                            kernel[kh][kw][ic][oco*OCI+oci],
-                            name='kernel_vec')
+
+    if autotvm.GLOBAL_SCOPE.in_tuning:

Review comment:
   Change schedule for arm_cpu conv2d as well?
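
   For context, the `in_tuning` placeholder swap under discussion looks roughly like the sketch below (shapes and names are made up for illustration; the real code lives inside `conv2d_spatial_pack_nhwc`):

```
# Self-contained sketch of skipping the kernel layout transform while tuning (illustrative values only).
from tvm import te, autotvm

OCI = 4
kernel = te.placeholder((3, 3, 16, 32), name="kernel")    # KH, KW, IC, OC
kvshape = (32 // OCI, 3, 3, 16, OCI)                      # OCO, KH, KW, IC, OCI

if autotvm.GLOBAL_SCOPE.in_tuning:
    # during tuning, use a placeholder so the layout transform is not measured together with the conv
    kernel_vec = te.placeholder(kvshape, kernel.dtype, name="kernel_vec")
else:
    kernel_vec = te.compute(kvshape,
                            lambda oco, kh, kw, ic, oci: kernel[kh][kw][ic][oco * OCI + oci],
                            name="kernel_vec")
```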









[incubator-tvm] branch master updated (c55ed37 -> 55aefc2)

2020-05-30 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


from c55ed37  [REFACTOR][RELAY] Replace build_config with PassContext 
(#5698)
 add 55aefc2  [PYTORCH]floor_divide support for squeezenet (#5702)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/pytorch.py | 1 +
 1 file changed, 1 insertion(+)



[GitHub] [incubator-tvm] masahi merged pull request #5702: [PYTORCH]floor_divide support for squeezenet

2020-05-30 Thread GitBox


masahi merged pull request #5702:
URL: https://github.com/apache/incubator-tvm/pull/5702


   







[GitHub] [incubator-tvm] yongwww commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-05-30 Thread GitBox


yongwww commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r432886819



##
File path: python/tvm/relay/op/_transform.py
##
@@ -99,8 +99,80 @@ def _arange_shape_func(start, stop, step):
 
 @_reg.register_shape_func("arange", True)
 def arange_shape_func(attrs, inputs, _):
+"""
+Shape func for arange
+"""
 return [_arange_shape_func(*inputs)]
 
+@script
+def _strided_slice_shape_func_input_data(data, begin, end, strides,
+ slice_mode):
+ndim = len(data.shape)
+out = output_tensor((ndim,), "int64")
+for i in const_range(ndim):
+cbegin = 0
+cend = data.shape[i]
+cstride = 1
+if strides.shape[0] > i:
+cstride = strides[i]
+if begin.shape[0] > i:
+cbegin = begin[i]
+if end.shape[0] <= i:
+cend = data.shape[i]
+elif slice_mode != 0:
+if end[i] < 0:
+cend = data.shape[i]
+elif cstride < 0:
+cend = cbegin - end[i]
+else:
+cend = cbegin + end[i]
+else:
+cend = end[i]
+assert cstride != 0, "Strides can't be zero."
+out[i] = int64(ceil_div((int64(cend) - int64(cbegin)), int64(cstride)))
+return out
+
+@script
+def _strided_slice_shape_func_input_shape(data_shape, begin, end, strides, 
slice_mode):
+ndim = data_shape.shape[0]
+assert ndim == 2, "not correct"
+out = output_tensor((ndim,), "int64")
+for i in const_range(ndim):
+cbegin = int64(0)
+cend = int64(data_shape[i])
+cstride = int64(1)
+if len(strides) > i:
+cstride = int64(strides[i])
+if len(begin) > i:
+cbegin = int64(begin[i])
+if len(end) <= i:
+cend = int64(data_shape[i])
+elif slice_mode != 0:
+if end[i] < 0:
+cend = int64(data_shape[i])
+elif cstride < 0:

Review comment:
   yes, updated
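
   For reference, the shape arithmetic in the hybrid script above boils down to `out[i] = ceil((end - begin) / stride)`; a quick plain-Python sanity check (my own, illustrative only):

```
# Plain-Python check of the ceil_div shape rule used by the hybrid shape function above (illustrative).
import math

def slice_out_dim(dim, begin=0, end=None, stride=1):
    end = dim if end is None else end
    return int(math.ceil((end - begin) / stride))

assert slice_out_dim(10, begin=2, end=9, stride=2) == len(range(2, 9, 2))  # both are 4
```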









[GitHub] [incubator-tvm] yongwww commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-05-30 Thread GitBox


yongwww commented on a change in pull request #4312:
URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r432886691



##
File path: python/tvm/relay/frontend/tensorflow.py
##
@@ -614,6 +614,52 @@ def _impl(inputs, attr, params, mod):
         return out
     return _impl
 
+def _nms():
+    def _impl(inputs, attr, params, mod):
+        # Get parameter values
+        max_output_size = int(np.atleast_1d(inputs[2].data.asnumpy().astype("int64"))[0])
+        iou_threshold = np.atleast_1d(inputs[3].data.asnumpy())[0]
+        # score_threshold was introduced from V3
+        score_threshold = np.atleast_1d(inputs[4].data.asnumpy())[0] if len(inputs) > 4 else 0.0
+
+        # Generate data with shape (1, num_anchors, 5)
+        scores = AttrCvt(op_name="expand_dims",
+                         ignores=['T_threshold'],
+                         extras={'axis': -1, 'num_newaxis': 1})([inputs[1]], attr)
+        data = get_relay_op('concatenate')([scores, inputs[0]], -1)
+        data = get_relay_op('expand_dims')(data, 0, 1)
+
+        # reason why using get_valid_counts is for inference performance
+        ct, data, indices = get_relay_op('get_valid_counts')(data,
+                                                             score_threshold=score_threshold,
+                                                             id_index=-1,
+                                                             score_index=0)
+        # TensorFlow NMS doesn't have parameter top_k
+        top_k = -1
+        # TF doesn't have class id for nms input
+        score_index = 0
+        nms_ret = get_relay_op('non_max_suppression')(data=data,
+                                                      valid_count=ct,
+                                                      indices=indices,
+                                                      max_output_size=max_output_size,
+                                                      iou_threshold=iou_threshold,
+                                                      force_suppress=True,
+                                                      top_k=top_k,
+                                                      coord_start=1,
+                                                      score_index=score_index,
+                                                      id_index=-1,
+                                                      return_indices=True,
+                                                      invalid_to_bottom=False)
+
+        # squeeze it, TF NMS is not batched
+        end = get_relay_op("squeeze")(nms_ret[1], axis=[1])
+        data_slice = get_relay_op("squeeze")(nms_ret[0], axis=[0])
+
+        # slice to get the dynamic result
+        ret = get_relay_op("strided_slice")(data_slice, _expr.const([0]), end, _expr.const([1]))
+        return ret
+    return _impl

Review comment:
   Updated. Also used slice_mode for tf `Slice`









[GitHub] [incubator-tvm] majiang31312 commented on issue #5686: [vulkan] Assertion in tir/transforms/lower_thread_allreduce.cc", line 157 TVMError: Check failed: v:

2020-05-30 Thread GitBox


majiang31312 commented on issue #5686:
URL: https://github.com/apache/incubator-tvm/issues/5686#issuecomment-636342592


   I'm new to TVM, but I will have a try :) 
   Thanks! @tqchen 







[GitHub] [incubator-tvm] siju-samuel opened a new pull request #5702: [PYTORCH]floor_divide support for squeezenet

2020-05-30 Thread GitBox


siju-samuel opened a new pull request #5702:
URL: https://github.com/apache/incubator-tvm/pull/5702


   `aten::floor_divide` support.
   https://github.com/apache/incubator-tvm/issues/5133#issuecomment-636330705
   
   Test case: I'm not able to simulate floor_divide.
   @masahi Please help to review this.
   







[GitHub] [incubator-tvm] power0341 commented on issue #5133: [Torch] A list of missing op conversion in need of help

2020-05-30 Thread GitBox


power0341 commented on issue #5133:
URL: https://github.com/apache/incubator-tvm/issues/5133#issuecomment-636336567


   @siju-samuel 
   thanks a lot, it works, good job







[GitHub] [incubator-tvm] siju-samuel commented on issue #5133: [Torch] A list of missing op conversion in need of help

2020-05-30 Thread GitBox


siju-samuel commented on issue #5133:
URL: https://github.com/apache/incubator-tvm/issues/5133#issuecomment-636334501


   @power0341 
   Please apply the below patch and let me know. 
   
   ```
   diff --git a/python/tvm/relay/frontend/pytorch.py 
b/python/tvm/relay/frontend/pytorch.py
   index f68affd82..f2e24a128 100644
   --- a/python/tvm/relay/frontend/pytorch.py
   +++ b/python/tvm/relay/frontend/pytorch.py
   @@ -1760,6 +1760,7 @@ def _get_convert_map(prelude):
"aten::arange"  : _arange(),
"aten::div" : _elemwise("divide"),
"aten::div_": _elemwise("divide"),
   +"aten::floor_divide": _elemwise("floor_divide"),
"aten::addcdiv" : _addcdiv(),
"aten::addcmul" : _addcmul(),
"aten::ones": _ones(),
   
   ```
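
   A hypothetical smoke test for the patch (assuming it is applied and a PyTorch version that provides `torch.floor_divide`):

```
# Hypothetical check: trace a tiny module using floor_divide and convert it with the PyTorch frontend
# (assumes the patch above is applied; not part of the original comment).
import torch
import tvm
from tvm import relay

class FloorDiv(torch.nn.Module):
    def forward(self, x, y):
        return torch.floor_divide(x, y)

x = torch.rand(2, 3) * 10
y = torch.rand(2, 3) + 1
traced = torch.jit.trace(FloorDiv(), (x, y))
mod, params = relay.frontend.from_pytorch(traced, [("x", list(x.shape)), ("y", list(y.shape))])
print(mod)  # should contain relay floor_divide instead of raising NotImplementedError
```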







[GitHub] [incubator-tvm] power0341 commented on issue #5133: [Torch] A list of missing op conversion in need of help

2020-05-30 Thread GitBox


power0341 commented on issue #5133:
URL: https://github.com/apache/incubator-tvm/issues/5133#issuecomment-636330705


   > NotImplementedError: The following operators are not implemented: ['aten::floor_divide']
   
   Can we have 'aten::floor_divide' as well? It's required by ShuffleNetV2.
   







[GitHub] [incubator-tvm] srkreddy1238 commented on a change in pull request #5695: fix small bug about dense_grad

2020-05-30 Thread GitBox


srkreddy1238 commented on a change in pull request #5695:
URL: https://github.com/apache/incubator-tvm/pull/5695#discussion_r432831890



##
File path: python/tvm/relay/op/_tensor_grad.py
##
@@ -472,8 +472,8 @@ def bias_add_grad(orig, grad):
 def dense_grad(orig, grad):
     """Returns [grad' @ weight, data @ grad']"""
     data, weight = orig.args
-    return [collapse_sum_like(transpose(grad) * weight, data),
-            collapse_sum_like(data * transpose(grad), weight)]
+    return [collapse_sum_like(_nn.dense(grad, transpose(weight)), data),

Review comment:
   In the above example, data of shape (5, 4) and weight of shape (3, 4) imply a dense layer with 4 inputs yielding 3 outputs for each of the 5 rows; so 5 is the batch size when we apply it in a network.
   
   We do support units/batches. Can you share details on the error you got when units is added as an arg?
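
   A quick NumPy shape check of the point above (my own sketch; `_nn.dense(a, b)` computes `a @ b.T`):

```
# Shape sanity check (sketch) for dense_grad with data (5, 4) and weight (3, 4), as in the example above.
import numpy as np

data = np.random.randn(5, 4)     # batch = 5, in_features = 4
weight = np.random.randn(3, 4)   # units (out_features) = 3, in_features = 4
out = data @ weight.T            # (5, 3), what nn.dense computes
grad = np.ones_like(out)         # upstream gradient, shape (5, 3)

grad_data = grad @ weight        # (5, 4), i.e. dense(grad, transpose(weight))
grad_weight = grad.T @ data      # (3, 4), the gradient w.r.t. weight
assert grad_data.shape == data.shape and grad_weight.shape == weight.shape
```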









[GitHub] [incubator-tvm] handar423 commented on a change in pull request #5695: fix small bug about dense_grad

2020-05-30 Thread GitBox


handar423 commented on a change in pull request #5695:
URL: https://github.com/apache/incubator-tvm/pull/5695#discussion_r432826485



##
File path: python/tvm/relay/op/_tensor_grad.py
##
@@ -472,8 +472,8 @@ def bias_add_grad(orig, grad):
 def dense_grad(orig, grad):
     """Returns [grad' @ weight, data @ grad']"""
     data, weight = orig.args
-    return [collapse_sum_like(transpose(grad) * weight, data),
-            collapse_sum_like(data * transpose(grad), weight)]
+    return [collapse_sum_like(_nn.dense(grad, transpose(weight)), data),

Review comment:
   Thank you for your response!
   After correcting it, I found that both x86 and CUDA only support dense without batching. I tried testing with arm_cpu (mobile), but it also went through [python/tvm/relay/op/strategy/x86.py](https://github.com/apache/incubator-tvm/blob/master/python/tvm/relay/op/strategy/x86.py) and failed as well. Could you please tell me how to run dense with batching?
   Thanks again!









[GitHub] [incubator-tvm] FrozenGene commented on issue #5215: [AutoTVM] AutoTVM incorrect measurement

2020-05-30 Thread GitBox


FrozenGene commented on issue #5215:
URL: https://github.com/apache/incubator-tvm/issues/5215#issuecomment-636300190


   > @FrozenGene In #5200 we discussed another source of autotvm inaccurate measurement due to empty input tensor. Do we have a timeline to fix that?
   
   @kevinthesun FYI: we are rebasing our code on the latest master. After completing that, this fix will be brought in.







[incubator-tvm] branch rust-tvm created (now a44a379)

2020-05-30 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a change to branch rust-tvm
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git.


  at a44a379  Refactor anyhow out of the rt layer

This branch includes the following new commits:

 new 4cf2dbc  Add tvm crate
 new a44a379  Refactor anyhow out of the rt layer

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.




[incubator-tvm] 01/02: Add tvm crate

2020-05-30 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a commit to branch rust-tvm
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git

commit 4cf2dbcf9b058e4d10f332a3fe0385d9387929fc
Author: Jared Roesch 
AuthorDate: Thu May 28 02:08:14 2020 -0700

Add tvm crate
---
 rust/tvm/.gitignore  |   7 +
 rust/tvm/.travis.yml |  22 +++
 rust/tvm/Cargo.toml  |  45 +
 rust/tvm/README.md   | 235 +++
 rust/tvm/examples/resnet/Cargo.toml  |  29 
 rust/tvm/examples/resnet/README.md   |  45 +
 rust/tvm/examples/resnet/build.rs|  42 +
 rust/tvm/examples/resnet/src/build_resnet.py | 134 +++
 rust/tvm/examples/resnet/src/main.rs | 160 ++
 rust/tvm/src/ir/array.rs |  70 
 rust/tvm/src/ir/mod.rs   |  17 ++
 rust/tvm/src/ir/relay/mod.rs | 232 ++
 rust/tvm/src/lib.rs  |  60 +++
 rust/tvm/src/runtime/mod.rs  |   1 +
 rust/tvm/src/transform.rs|  41 +
 rust/tvm/tests/basics/.gitignore |   7 +
 rust/tvm/tests/basics/Cargo.toml |  32 
 rust/tvm/tests/basics/build.rs   |  46 ++
 rust/tvm/tests/basics/src/main.rs|  55 +++
 rust/tvm/tests/basics/src/tvm_add.py |  50 ++
 rust/tvm/tests/callback/Cargo.toml   |  26 +++
 rust/tvm/tests/callback/src/bin/array.rs |  72 
 rust/tvm/tests/callback/src/bin/error.rs |  56 +++
 rust/tvm/tests/callback/src/bin/float.rs |  50 ++
 rust/tvm/tests/callback/src/bin/int.rs   |  49 ++
 rust/tvm/tests/callback/src/bin/string.rs|  54 ++
 rust/tvm/tests/test_ir.rs|  37 +
 27 files changed, 1674 insertions(+)

diff --git a/rust/tvm/.gitignore b/rust/tvm/.gitignore
new file mode 100644
index 000..2430329
--- /dev/null
+++ b/rust/tvm/.gitignore
@@ -0,0 +1,7 @@
+target
+**/*.rs.bk
+Cargo.lock
+/tests/basics/add_*
+/examples/resnet/deploy_*
+/examples/resnet/*.png
+/examples/resnet/synset.*
diff --git a/rust/tvm/.travis.yml b/rust/tvm/.travis.yml
new file mode 100644
index 000..e963b7c
--- /dev/null
+++ b/rust/tvm/.travis.yml
@@ -0,0 +1,22 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+language: rust
+rust:
+  - nightly
+matrix:
+  fast_finish: true
diff --git a/rust/tvm/Cargo.toml b/rust/tvm/Cargo.toml
new file mode 100644
index 000..4cbb619
--- /dev/null
+++ b/rust/tvm/Cargo.toml
@@ -0,0 +1,45 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+[package]
+name = "tvm"
+version = "0.1.0"
+license = "Apache-2.0"
+description = "Rust frontend support for TVM"
+repository = "https://github.com/apache/incubator-tvm;
+homepage = "https://github.com/apache/incubator-tvm;
+readme = "README.md"
+keywords = ["rust", "tvm"]
+categories = ["api-bindings", "science"]
+authors = ["TVM Contributors"]
+edition = "2018"
+
+[dependencies]
+thiserror = "^1.0"
+anyhow = "^1.0"
+lazy_static = "1.1"
+ndarray = "0.12"
+num-traits = "0.2"
+tvm-rt = { version = "0.1", path = "../tvm-rt/" }
+tvm-sys = { version = "0.1", path = "../tvm-sys/" }
+tvm-macros = { version = "*", path = "../macros/" }
+paste = "0.1"
+mashup = "0.1"
+once_cell = "^1.3.1"
+
+[features]
+blas = ["ndarray/blas"]
diff --git 

[incubator-tvm] 02/02: Refactor anyhow out of the rt layer

2020-05-30 Thread jroesch
This is an automated email from the ASF dual-hosted git repository.

jroesch pushed a commit to branch rust-tvm
in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git

commit a44a379bb3b3f4fab505dce3520eeb97f230ac23
Author: Jared Roesch 
AuthorDate: Sat May 30 01:07:46 2020 -0700

Refactor anyhow out of the rt layer
---
 rust/Cargo.toml  |  3 +-
 rust/macros/src/object.rs|  8 ++---
 rust/tvm-rt/src/errors.rs| 36 
 rust/tvm-rt/src/function.rs  | 66 +---
 rust/tvm-rt/src/lib.rs   |  5 +--
 rust/tvm-rt/src/ndarray.rs   | 66 +++-
 rust/tvm-rt/src/object/mod.rs| 17 --
 rust/tvm-rt/src/object/object_ptr.rs | 30 +---
 rust/tvm-rt/src/to_function.rs   | 37 ++--
 rust/tvm-rt/src/value.rs |  5 ++-
 rust/tvm/src/ir/array.rs | 55 +-
 rust/tvm/src/lib.rs  | 10 +-
 rust/tvm/src/transform.rs|  2 +-
 13 files changed, 211 insertions(+), 129 deletions(-)

diff --git a/rust/Cargo.toml b/rust/Cargo.toml
index 6d3481b..e107104 100644
--- a/rust/Cargo.toml
+++ b/rust/Cargo.toml
@@ -29,5 +29,6 @@ members = [
"frontend/tests/callback",
"frontend/examples/resnet",
 "tvm-sys",
-   "tvm-rt"
+   "tvm-rt",
+   "tvm",
 ]
diff --git a/rust/macros/src/object.rs b/rust/macros/src/object.rs
index 96a86dd..670d326 100644
--- a/rust/macros/src/object.rs
+++ b/rust/macros/src/object.rs
@@ -89,12 +89,12 @@ pub fn macro_impl(input: proc_macro::TokenStream) -> 
TokenStream {
 }
 
 impl std::convert::TryFrom for #ref_id {
-type Error = ::anyhow::Error;
+type Error = tvm_rt::Error;
 
 fn try_from(ret_val: tvm_rt::RetValue) -> Result<#ref_id, 
Self::Error> {
 use std::convert::TryInto;
 let oref: ObjectRef = ret_val.try_into()?;
-let ptr = oref.0.ok_or(anyhow::anyhow!("null ptr"))?;
+let ptr = oref.0.ok_or(tvm_rt::Error::Null)?;
 let ptr = ptr.downcast::<#payload_id>()?;
 Ok(#ref_id(Some(ptr)))
 }
@@ -122,7 +122,7 @@ pub fn macro_impl(input: proc_macro::TokenStream) -> 
TokenStream {
 }
 
 impl<'a> std::convert::TryFrom> for #ref_id {
-type Error = anyhow::Error;
+type Error = tvm_rt::Error;
 
 fn try_from(arg_value: tvm_rt::ArgValue<'a>) -> Result<#ref_id, 
Self::Error> {
 use std::convert::TryInto;
@@ -132,7 +132,7 @@ pub fn macro_impl(input: proc_macro::TokenStream) -> 
TokenStream {
 }
 
 impl<'a> std::convert::TryFrom<_rt::ArgValue<'a>> for #ref_id {
-type Error = anyhow::Error;
+type Error = tvm_rt::Error;
 
 fn try_from(arg_value: _rt::ArgValue<'a>) -> Result<#ref_id, 
Self::Error> {
 use std::convert::TryInto;
diff --git a/rust/tvm-rt/src/errors.rs b/rust/tvm-rt/src/errors.rs
index 77dbba7..41e873f 100644
--- a/rust/tvm-rt/src/errors.rs
+++ b/rust/tvm-rt/src/errors.rs
@@ -17,13 +17,10 @@
  * under the License.
  */
 
+use crate::DataType;
 use thiserror::Error;
 
 #[derive(Debug, Error)]
-#[error("Cannot convert from an empty array.")]
-pub struct EmptyArrayError;
-
-#[derive(Debug, Error)]
 #[error("Handle `{name}` is null.")]
 pub struct NullHandleError {
 pub name: String,
@@ -41,5 +38,32 @@ pub struct TypeMismatchError {
 }
 
 #[derive(Debug, Error)]
-#[error("Missing NDArray shape.")]
-pub struct MissingShapeError;
+pub enum NDArrayError {
+#[error("Missing NDArray shape.")]
+MissingShape,
+#[error("Cannot convert from an empty array.")]
+EmptyArray,
+#[error("Invalid datatype when attempting to convert ndarray.")]
+InvalidDatatype(#[from] tvm_sys::datatype::ParseDataTypeError),
+#[error("a shape error occurred in the Rust ndarray library")]
+ShapeError(#[from] ndarray::ShapeError),
+#[error("Expected type `{expected}` but found `{actual}`")]
+DataTypeMismatch { expected: DataType, actual: DataType }
+}
+
+#[derive(Debug, Error)]
+pub enum Error {
+#[error("{0}")]
+Downcast(#[from] tvm_sys::errors::ValueDowncastError),
+#[error("raw pointer passed across boundary was null")]
+Null,
+}
+
+impl Error {
+pub fn downcast(actual_type: String, expected_type: &'static str) -> Error 
{
+Self::Downcast(tvm_sys::errors::ValueDowncastError {
+actual_type,
+expected_type,
+})
+}
+}
diff --git a/rust/tvm-rt/src/function.rs b/rust/tvm-rt/src/function.rs
index 2a5f446..17f5f6e 100644
--- a/rust/tvm-rt/src/function.rs
+++ b/rust/tvm-rt/src/function.rs
@@ -33,12 +33,14 @@ use std::{
 ptr, slice, str,
 sync::Mutex,
 };
-
+use std::convert::{TryFrom};
 use anyhow::Result;
 use lazy_static::lazy_static;
 
 pub use 

[GitHub] [incubator-tvm] FrozenGene commented on issue #5038: [RFC] Module based Model Runtime Interface

2020-05-30 Thread GitBox


FrozenGene commented on issue #5038:
URL: https://github.com/apache/incubator-tvm/issues/5038#issuecomment-636294461


   > @FrozenGene can we follow up on this?
   
   Hi @tqchen, I will start working on it from next Monday! Sorry for the late start; I have been tied up with other things.







[GitHub] [incubator-tvm] cchung100m commented on a change in pull request #5684: [AutoTVM][TOPI] Fix bifrost spatial packing conv2d auto tune

2020-05-30 Thread GitBox


cchung100m commented on a change in pull request #5684:
URL: https://github.com/apache/incubator-tvm/pull/5684#discussion_r432817217



##
File path: topi/python/topi/bifrost/conv2d.py
##
@@ -142,13 +142,14 @@ def _schedule_spatial_pack(cfg, s, output, conv, data_vec, kernel_vec):
         s[data_vec].unroll(vw)
 
     if isinstance(kernel_vec.op, tvm.te.ComputeOp) and kernel_vec.name == 'kernel_vec':
+        co, ci, kh, kw, vc = s[kernel_vec].op.axis

Review comment:
   Hi @kevinthesun,
   Thanks for the review. I moved the placeholder replacement into the compute.




