[GitHub] [tvm] leowang1225 commented on a change in pull request #7185: [AutoScheduler] Add custom build function

2021-01-03 Thread GitBox


leowang1225 commented on a change in pull request #7185:
URL: https://github.com/apache/tvm/pull/7185#discussion_r551162129



##
File path: python/tvm/auto_scheduler/measure.py
##
@@ -624,12 +647,7 @@ def local_build_worker(args):
 The build result of this Builder thread.
 """
 inp, build_func, timeout, verbose = args
-if build_func == "default":
-build_func = tar.tar
-elif build_func == "ndk":
-build_func = ndk.create_shared
-else:
-raise ValueError("Invalid build_func" + build_func)
+build_func = BuildFunc.build_func

Review comment:
   A custom build func is a Python callable object and may carry attributes 
and other state, so we cannot serialize it into args.
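To make the serialization point concrete, here is a minimal sketch of why an arbitrary callable cannot travel through the pickled `args`. The nested function is a hypothetical stand-in for a user's custom build func, not code from the PR:

```python
import pickle


def make_custom_build_func():
    """Return a closure-style build func, as a user might pass in.

    The nested function is a hypothetical stand-in; it exists only to
    show that such objects are not picklable by the standard machinery.
    """

    def custom_build(inputs, results):  # hypothetical signature
        return "built"

    return custom_build


try:
    pickle.dumps(make_custom_build_func())
    print("serialized fine")
except Exception as exc:  # pickling a local function fails
    print("cannot serialize:", exc)
```

Registered string names ("default", "ndk") avoid this entirely, which is why only the name crosses the process boundary.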





This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org




[GitHub] [tvm] cxcxcxcx commented on a change in pull request #7190: Makes sure g_last_error is null terminated.

2021-01-03 Thread GitBox


cxcxcxcx commented on a change in pull request #7190:
URL: https://github.com/apache/tvm/pull/7190#discussion_r551159043



##
File path: src/runtime/crt/common/crt_runtime_api.c
##
@@ -38,7 +38,10 @@
 
 static char g_last_error[1024];
 
-void TVMAPISetLastError(const char* msg) { strncpy(g_last_error, msg, sizeof(g_last_error)); }
+void TVMAPISetLastError(const char* msg) {
+  strncpy(g_last_error, msg, sizeof(g_last_error) - 1);
+  g_last_error[sizeof(g_last_error) - 1] = 0;

Review comment:
   Does "truncate msg to the size of g_last_error" mean mutating "msg"? 
That doesn't sound doable in all cases. My understanding:
   
   1. Before the proposed change, when the error message is longer than 1024, 
g_last_error won't be null terminated. I believe this is what the warning is 
about.
   2. With the change, g_last_error will always be null-terminated.
   3. If we use `strcpy`, we need to call `strlen` first. If the length is long, 
we need to do something special; I don't think that's trivial. It could be 
faster, but I wonder how much it matters, since errors are probably rare.
   4. `snprintf` may be an alternative, but it's probably not too different in 
this context.
   
   Given these, would you reconsider the change as is?









[GitHub] [tvm] liangfu commented on a change in pull request #7190: Makes sure g_last_error is null terminated.

2021-01-03 Thread GitBox


liangfu commented on a change in pull request #7190:
URL: https://github.com/apache/tvm/pull/7190#discussion_r551154025



##
File path: src/runtime/crt/common/crt_runtime_api.c
##
@@ -38,7 +38,10 @@
 
 static char g_last_error[1024];
 
-void TVMAPISetLastError(const char* msg) { strncpy(g_last_error, msg, sizeof(g_last_error)); }
+void TVMAPISetLastError(const char* msg) {
+  strncpy(g_last_error, msg, sizeof(g_last_error) - 1);
+  g_last_error[sizeof(g_last_error) - 1] = 0;

Review comment:
   I think no null-character is implicitly appended at the end of destination 
only if source is longer than num; otherwise, destination is padded with zeros 
until a total of num characters have been written. See 
[strncpy - C++ reference](http://www.cplusplus.com/reference/cstring/strncpy/).
   
   The error reported by the compiler meant "bound 1024 equals destination 
size". To correct this, I think it would be better to copy msg directly to 
g_last_error:
   ```c
   strcpy(g_last_error, msg);
   ```









[GitHub] [tvm] FrozenGene commented on pull request #7185: [AutoScheduler] Add custom build function

2021-01-03 Thread GitBox


FrozenGene commented on pull request #7185:
URL: https://github.com/apache/tvm/pull/7185#issuecomment-753811336


   @jcf94 could you do another round of review?







[GitHub] [tvm] leowang1225 commented on a change in pull request #7185: [AutoScheduler] Add custom build function

2021-01-03 Thread GitBox


leowang1225 commented on a change in pull request #7185:
URL: https://github.com/apache/tvm/pull/7185#discussion_r551153793



##
File path: python/tvm/auto_scheduler/measure.py
##
@@ -652,7 +652,6 @@ def local_build_worker(args):
 The build result of this Builder thread.
 """
 inp, build_func, timeout, verbose = args
-assert any(build_func == name for name in BuildFunc.name)

Review comment:
   The `any` check was erroneous, so I changed it.









[GitHub] [tvm] FrozenGene commented on a change in pull request #7185: [AutoScheduler] Add custom build function

2021-01-03 Thread GitBox


FrozenGene commented on a change in pull request #7185:
URL: https://github.com/apache/tvm/pull/7185#discussion_r551149875



##
File path: python/tvm/auto_scheduler/measure.py
##
@@ -652,7 +652,6 @@ def local_build_worker(args):
 The build result of this Builder thread.
 """
 inp, build_func, timeout, verbose = args
-assert any(build_func == name for name in BuildFunc.name)

Review comment:
   Was it removed automatically by the `black` tool?









[GitHub] [tvm] FrozenGene commented on a change in pull request #7185: [AutoScheduler] Add custom build function

2021-01-03 Thread GitBox


FrozenGene commented on a change in pull request #7185:
URL: https://github.com/apache/tvm/pull/7185#discussion_r551141295



##
File path: python/tvm/auto_scheduler/measure.py
##
@@ -303,12 +312,26 @@ class LocalBuilder(ProgramBuilder):
 This is used in a wrapper of the multiprocessing.Process.join().
 n_parallel : int = multiprocessing.cpu_count()
 Number of threads used to build in parallel.
-build_func : str = 'default'
-The name of registered build function.
+build_func: callable or str = "default"
+If is 'default', use default build function
+If is 'ndk', use function for android ndk
+If is callable, use it as custom build function, expect lib_format field.
 """
 
 def __init__(self, timeout=15, n_parallel=multiprocessing.cpu_count(), build_func="default"):
-self.__init_handle_by_constructor__(_ffi_api.LocalBuilder, timeout, n_parallel, build_func)
+if build_func == "default":
+if build_func == "default":

Review comment:
   He needs to keep the state of `build_func` across functions, so just 
`self.build_func` cannot meet the requirement, because that would require 
access to a class instance.
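The pattern under discussion can be sketched as follows. Names follow the PR, but the bodies are illustrative stand-ins, not the actual TVM code (the real default is `tar.tar`, and an "ndk" branch is omitted here): the chosen callable is stored on class attributes so that module-level worker functions can reach it without holding a `LocalBuilder` instance:

```python
class BuildFunc:
    """Stores the chosen build function as class attributes, so that
    module-level worker functions can look it up without holding a
    LocalBuilder instance or serializing the callable."""
    name = "default"
    build_func = None


def default_build(*args):
    """Illustrative stand-in for the default builder (tar.tar in the PR)."""


class LocalBuilder:
    """Sketch of the constructor logic discussed above (illustrative)."""

    def __init__(self, build_func="default"):
        if build_func == "default":
            BuildFunc.name = "default"
            BuildFunc.build_func = default_build
        elif callable(build_func):
            BuildFunc.name = "custom"
            BuildFunc.build_func = build_func
        else:
            raise ValueError("Invalid build_func: " + repr(build_func))
```

The trade-off raised in the review still applies: a class-level attribute is effectively a global, so two builders with different custom functions cannot coexist.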









[GitHub] [tvm] leowang1225 commented on a change in pull request #7185: [AutoScheduler] Add custom build function

2021-01-03 Thread GitBox


leowang1225 commented on a change in pull request #7185:
URL: https://github.com/apache/tvm/pull/7185#discussion_r551140443



##
File path: python/tvm/auto_scheduler/measure.py
##
@@ -303,12 +312,26 @@ class LocalBuilder(ProgramBuilder):
 This is used in a wrapper of the multiprocessing.Process.join().
 n_parallel : int = multiprocessing.cpu_count()
 Number of threads used to build in parallel.
-build_func : str = 'default'
-The name of registered build function.
+build_func: callable or str = "default"
+If is 'default', use default build function
+If is 'ndk', use function for android ndk
+If is callable, use it as custom build function, expect lib_format field.
 """
 
 def __init__(self, timeout=15, n_parallel=multiprocessing.cpu_count(), build_func="default"):
-self.__init_handle_by_constructor__(_ffi_api.LocalBuilder, timeout, n_parallel, build_func)
+if build_func == "default":
+if build_func == "default":

Review comment:
   When `auto_scheduler.local_builder.build` is called, the calling context 
cannot access the `LocalBuilder` class instance.









[GitHub] [tvm] FrozenGene commented on a change in pull request #7185: [AutoScheduler] Add custom build function

2021-01-03 Thread GitBox


FrozenGene commented on a change in pull request #7185:
URL: https://github.com/apache/tvm/pull/7185#discussion_r551139727



##
File path: python/tvm/auto_scheduler/measure.py
##
@@ -624,12 +647,7 @@ def local_build_worker(args):
 The build result of this Builder thread.
 """
 inp, build_func, timeout, verbose = args
-if build_func == "default":
-build_func = tar.tar
-elif build_func == "ndk":
-build_func = ndk.create_shared
-else:
-raise ValueError("Invalid build_func" + build_func)
+build_func = BuildFunc.build_func

Review comment:
   If this is a custom build func, we cannot serialize it into `args`, 
because it is a callable object, not a str.









[GitHub] [tvm] FrozenGene commented on a change in pull request #7185: [AutoScheduler] Add custom build function

2021-01-03 Thread GitBox


FrozenGene commented on a change in pull request #7185:
URL: https://github.com/apache/tvm/pull/7185#discussion_r551139319



##
File path: python/tvm/auto_scheduler/measure.py
##
@@ -624,12 +647,7 @@ def local_build_worker(args):
 The build result of this Builder thread.
 """
 inp, build_func, timeout, verbose = args
-if build_func == "default":
-build_func = tar.tar
-elif build_func == "ndk":
-build_func = ndk.create_shared
-else:
-raise ValueError("Invalid build_func" + build_func)

Review comment:
   We could add a check: `assert any(build_func == name for name in BuildFunc.name)`

##
File path: python/tvm/auto_scheduler/measure.py
##
@@ -63,6 +63,15 @@
 # We use 1e10 instead of sys.float_info.max for better readability in log
 MAX_FLOAT = 1e10
 
+class BuildFunc:
+""" store build_func name and callable to class variable.

Review comment:
   No need for `to class variable`.









[GitHub] [tvm] jcf94 commented on a change in pull request #7185: [AutoScheduler] Add custom build function

2021-01-03 Thread GitBox


jcf94 commented on a change in pull request #7185:
URL: https://github.com/apache/tvm/pull/7185#discussion_r551138247



##
File path: python/tvm/auto_scheduler/measure.py
##
@@ -624,12 +647,7 @@ def local_build_worker(args):
 The build result of this Builder thread.
 """
 inp, build_func, timeout, verbose = args
-if build_func == "default":
-build_func = tar.tar
-elif build_func == "ndk":
-build_func = ndk.create_shared
-else:
-raise ValueError("Invalid build_func" + build_func)
+build_func = BuildFunc.build_func

Review comment:
   Get `build_func` from `args`, rather than from the global `BuildFunc`.









[GitHub] [tvm] jcf94 commented on a change in pull request #7185: [AutoScheduler] Add custom build function

2021-01-03 Thread GitBox


jcf94 commented on a change in pull request #7185:
URL: https://github.com/apache/tvm/pull/7185#discussion_r551137236



##
File path: python/tvm/auto_scheduler/measure.py
##
@@ -303,12 +312,26 @@ class LocalBuilder(ProgramBuilder):
 This is used in a wrapper of the multiprocessing.Process.join().
 n_parallel : int = multiprocessing.cpu_count()
 Number of threads used to build in parallel.
-build_func : str = 'default'
-The name of registered build function.
+build_func: callable or str = "default"
+If is 'default', use default build function
+If is 'ndk', use function for android ndk
+If is callable, use it as custom build function, expect lib_format field.
 """
 
 def __init__(self, timeout=15, n_parallel=multiprocessing.cpu_count(), build_func="default"):
-self.__init_handle_by_constructor__(_ffi_api.LocalBuilder, timeout, n_parallel, build_func)
+if build_func == "default":
+if build_func == "default":

Review comment:
   So this is designed to use the class static member?
   Why not just use `self.build_func`?









[GitHub] [tvm] leowang1225 commented on a change in pull request #7185: [AutoScheduler] Add custom build function

2021-01-03 Thread GitBox


leowang1225 commented on a change in pull request #7185:
URL: https://github.com/apache/tvm/pull/7185#discussion_r551135097



##
File path: python/tvm/auto_scheduler/measure.py
##
@@ -63,6 +63,9 @@
 # We use 1e10 instead of sys.float_info.max for better readability in log
 MAX_FLOAT = 1e10
 
+class CustomBuildFunc:

Review comment:
   done
   









[GitHub] [tvm] leowang1225 commented on a change in pull request #7185: [AutoScheduler] Add custom build function

2021-01-03 Thread GitBox


leowang1225 commented on a change in pull request #7185:
URL: https://github.com/apache/tvm/pull/7185#discussion_r551133295



##
File path: python/tvm/auto_scheduler/measure.py
##
@@ -303,11 +306,16 @@ class LocalBuilder(ProgramBuilder):
 This is used in a wrapper of the multiprocessing.Process.join().
 n_parallel : int = multiprocessing.cpu_count()
 Number of threads used to build in parallel.
-build_func : str = 'default'
-The name of registered build function.
+build_func: callable or str

Review comment:
   I have added a `BuildFunc` default name of `"default"` with a default 
callable of `tar.tar`.









[GitHub] [tvm] leowang1225 commented on a change in pull request #7185: [AutoScheduler] Add custom build function

2021-01-03 Thread GitBox


leowang1225 commented on a change in pull request #7185:
URL: https://github.com/apache/tvm/pull/7185#discussion_r551132967



##
File path: python/tvm/auto_scheduler/measure.py
##
@@ -628,6 +636,8 @@ def local_build_worker(args):
 build_func = tar.tar
 elif build_func == "ndk":
 build_func = ndk.create_shared
+elif build_func == "custom":

Review comment:
   I changed all the conditions to use `BuildFunc`.









[GitHub] [tvm] masahi opened a new pull request #7195: [THRUST] Faster multi dimensional argsort by segmented sort

2021-01-03 Thread GitBox


masahi opened a new pull request #7195:
URL: https://github.com/apache/tvm/pull/7195


   The current implementation of thrust argsort, when given multi-dimensional 
inputs to sort along the innermost axis, is very inefficient: it does `n_iter` 
calls to thrust sort. See
   
   
https://github.com/apache/tvm/blob/bad149ed8a555444d813537608ee5cea9e95e97e/src/runtime/contrib/thrust/thrust.cu#L50-L65
   
   When the outer dimension is large, the performance of thrust argsort is far 
from optimal. In particular, the thrust numbers shown in 
https://github.com/apache/tvm/pull/7099 do not reflect the true performance 
thrust can achieve.
   
   This PR replaces the `n_iter` calls to thrust argsort with one segmented 
sort by key. Since thrust doesn't provide an API for segmented sort, I used a 
neat back-to-back stable-sort-by-key trick explained in 
https://groups.google.com/forum/#!topic/thrust-users/BoLsxO6b4FY. My 
implementation is a bit more complicated because we need to do segmented sort 
**by key**, not just segmented sort. 
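The back-to-back trick can be modeled in NumPy (a sketch of the idea only; the actual PR runs thrust on the GPU): stably sort the flattened values, then stably sort the result by segment id, and stability preserves the per-segment value order:

```python
import numpy as np


def segmented_argsort(values):
    """Argsort each row of a 2D array with two flat stable sorts,
    mimicking the back-to-back stable-sort-by-key trick."""
    n_rows, n_cols = values.shape
    flat = values.ravel()
    seg_ids = np.repeat(np.arange(n_rows), n_cols)  # row id of each element
    col_ids = np.tile(np.arange(n_cols), n_rows)    # column id of each element
    # Pass 1: stable sort everything by value.
    order = np.argsort(flat, kind="stable")
    # Pass 2: stable sort the result by segment id; stability preserves
    # the within-segment value ordering established by pass 1.
    order2 = np.argsort(seg_ids[order], kind="stable")
    return col_ids[order][order2].reshape(n_rows, n_cols)
```

Both passes sort the full flattened array once, so the cost no longer grows with the number of rows sorted, only with the total element count.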
   







[GitHub] [tvm] codeislife99 commented on pull request #7149: Sparse segment sum sqrtn op

2021-01-03 Thread GitBox


codeislife99 commented on pull request #7149:
URL: https://github.com/apache/tvm/pull/7149#issuecomment-753730025


   @tkonolige @mbrookhart Can I get a re-review on this PR ? I have added the 
TF Frontend code and some more documentation. 







[GitHub] [tvm] codeislife99 commented on pull request #7149: Sparse segment sum sqrtn op

2021-01-03 Thread GitBox


codeislife99 commented on pull request #7149:
URL: https://github.com/apache/tvm/pull/7149#issuecomment-753729768


   Context for these PRs: the goal of adding these sparse ops is to enable a 
customer to run their recommendation model, which is currently getting split 
into multiple subgraphs because we do not cover this op.
   
   I had an offline discussion with the main reviewers, but I will also try to 
summarize the conclusions from it and the comments here:
   1. New namespace: further discussion will happen in a separate thread after 
the current sparse-op PRs (this one included) are merged, since a few sparse 
ops already exist without the namespace; if a new namespace is necessary, all 
the current and previous sparse ops will be moved into it.
   2. Documentation: more documentation has been added.
   3. High-level approach like XLA: discussion in a separate thread.
   4. This operation is TF-specific.







[GitHub] [tvm] masahi opened a new pull request #7194: [CUBLAS, CUDNN] Support dynamic batch size

2021-01-03 Thread GitBox


masahi opened a new pull request #7194:
URL: https://github.com/apache/tvm/pull/7194


   Thanks for contributing to TVM!   Please refer to guideline 
https://tvm.apache.org/docs/contribute/ for useful information and tips. After 
the pull request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   







[GitHub] [tvm] codeislife99 commented on a change in pull request #7149: Sparse segment sum sqrtn op

2021-01-03 Thread GitBox


codeislife99 commented on a change in pull request #7149:
URL: https://github.com/apache/tvm/pull/7149#discussion_r551089298



##
File path: python/tvm/relay/op/transform.py
##
@@ -1320,3 +1320,55 @@ def adv_index(inputs):
 Output tensor.
 """
 return _make.adv_index(Tuple(inputs))
+
+
+def sparse_segment_sum(data, indices, segment_ids, num_segments=None):
+"""
+Compute the sparse segment sum on the indices over the segment_ids

Review comment:
   I will include a link to the TF documentation which will make it 
clearer. 
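As a rough model of the TF semantics under discussion (a NumPy sketch, not the relay implementation): gather rows of `data` at `indices`, then sum the gathered rows grouped by `segment_ids`:

```python
import numpy as np


def sparse_segment_sum(data, indices, segment_ids, num_segments=None):
    """NumPy sketch of TF's sparse_segment_sum: gather rows of `data` at
    `indices`, then sum the gathered rows grouped by `segment_ids`."""
    n_seg = int(num_segments) if num_segments is not None else int(segment_ids.max()) + 1
    out = np.zeros((n_seg,) + data.shape[1:], dtype=data.dtype)
    np.add.at(out, segment_ids, data[indices])  # scatter-add per segment
    return out
```

For example, with `data = [[1, 2], [3, 4], [5, 6]]`, `indices = [0, 1]`, and `segment_ids = [0, 0]`, rows 0 and 1 are gathered and summed into a single segment, giving `[[4, 6]]`.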









[GitHub] [tvm] codeislife99 commented on a change in pull request #7149: Sparse segment sum sqrtn op

2021-01-03 Thread GitBox


codeislife99 commented on a change in pull request #7149:
URL: https://github.com/apache/tvm/pull/7149#discussion_r551089181



##
File path: src/relay/op/tensor/transform.cc
##
@@ -1553,6 +1553,59 @@ RELAY_REGISTER_OP("meshgrid")
 .set_attr<FTVMCompute>("FTVMCompute", MeshgridCompute)
 .set_attr<TOpPattern>("TOpPattern", kInjective);
 
+TVM_REGISTER_NODE_TYPE(SparseSegmentSumAttrs);
+
+bool SparseSegmentSumRel(const Array<Type>& types, int num_inputs, const Attrs& attrs,
+                         const TypeReporter& reporter) {
+  // types: [data, indices, segment_ids, result]
+  ICHECK_EQ(types.size(), 4) << "SparseSegmentSumRel expects 4 types but provided " << types.size();
+  auto data = types[0].as<TensorTypeNode>();
+  auto indices = types[1].as<TensorTypeNode>();
+  const auto* param = attrs.as<SparseSegmentSumAttrs>();
+  ICHECK(param != nullptr);
+  Array<IndexExpr> new_data_shape;
+  new_data_shape.push_back(tvm::max(indices->shape[0], param->num_segments));
+  for (int i = 1; i < static_cast<int>(data->shape.size()); ++i) {
+    new_data_shape.push_back(data->shape[i]);
+  }
+  std::vector<Type> fields;
+  fields.push_back(TensorType(new_data_shape, data->dtype));
+  fields.push_back(TensorType(Array<IndexExpr>{1}, tvm::DataType::Int(32)));
+  reporter->Assign(types[3], TupleType(Array<Type>(fields)));
+  return true;
+}
+
+Array<te::Tensor> SparseSegmentSumCompute(const Attrs& attrs, const Array<te::Tensor>& inputs,
+                                          const Type& out_type) {
+  ICHECK_EQ(inputs.size(), 3) << "SparseSegmentSumCompute expects 3 input but provided "
+                              << inputs.size();
+  const auto* param = attrs.as<SparseSegmentSumAttrs>();
+  ICHECK(param != nullptr);
+  return {topi::SparseSegmentSum(inputs[0], inputs[1], inputs[2], param->num_segments)};
+}
+
+Expr MakeSparseSegmentSum(Expr data, Expr indices, Expr segment_ids, int num_segments) {
+  auto attrs = make_object<SparseSegmentSumAttrs>();
+  attrs->num_segments = std::move(num_segments);
+  static const Op& op = Op::Get("sparse_segment_sum");
+  return Call(op, {data, indices, segment_ids}, Attrs(attrs), {});
+}
+
+TVM_REGISTER_GLOBAL("relay.op._make.sparse_segment_sum").set_body_typed(MakeSparseSegmentSum);
+
+RELAY_REGISTER_OP("sparse_segment_sum")
+    .describe(R"code(Return sparse segment sum of the tensor given segments
+)code" TVM_ADD_FILELINE)
+    .set_num_inputs(3)
+    .set_attrs_type<SparseSegmentSumAttrs>()
+    .add_argument("data", "Tensor", "The first tensor")
+    .add_argument("indices", "Tensor", "The second tensor")
+    .add_argument("segment_ids", "Tensor", "The third tensor")

Review comment:
   Done. 









[GitHub] [tvm] codeislife99 commented on pull request #7193: Fix ICHECK_NOTNULL in logging.h

2021-01-03 Thread GitBox


codeislife99 commented on pull request #7193:
URL: https://github.com/apache/tvm/pull/7193#issuecomment-753719016


   @anijain2305 @trevor-m @zhiics Can I get a quick +1 on this?
   







[GitHub] [tvm] codeislife99 opened a new pull request #7193: Fix ICHECK_NOTNULL in logging.h

2021-01-03 Thread GitBox


codeislife99 opened a new pull request #7193:
URL: https://github.com/apache/tvm/pull/7193


   Fix bug in logging.h







[GitHub] [tvm] codeislife99 opened a new pull request #7192: Sparse ops all

2021-01-03 Thread GitBox


codeislife99 opened a new pull request #7192:
URL: https://github.com/apache/tvm/pull/7192


   Thanks for contributing to TVM!   Please refer to guideline 
https://tvm.apache.org/docs/contribute/ for useful information and tips. After 
the pull request is submitted, please request code reviews from 
[Reviewers](https://github.com/apache/incubator-tvm/blob/master/CONTRIBUTORS.md#reviewers)
 by @ them in the pull request thread.
   







[GitHub] [tvm] sxjscience commented on pull request #7191: [Frontend][MXNet] add _npi_subtract_scalar

2021-01-03 Thread GitBox


sxjscience commented on pull request #7191:
URL: https://github.com/apache/tvm/pull/7191#issuecomment-753645201


   Thanks!!







[GitHub] [tvm] juannzou edited a comment on pull request #6012: [VTA] Move compiler related registry items to vta/build_module.py

2021-01-03 Thread GitBox


juannzou edited a comment on pull request #6012:
URL: https://github.com/apache/tvm/pull/6012#issuecomment-752647032


   Hi @lhf1997
   
   I had the same issue. Have you tried with [Pynq bitstream 
0.0.2](https://github.com/uwsampl/vta-distro/tree/master/bitstreams/pynq/0.0.2)?
 It worked for me! But then I got another error. I have opened new issue #7182 
on that. See also: 
https://discuss.tvm.apache.org/t/vta-test-fail-while-running-the-2d-convolution-testbench-on-pynq/8789
   
   Hope this helps.







[GitHub] [tvm] juannzou commented on issue #7182: VTA test fail while running the 2D convolution testbench on Pynq

2021-01-03 Thread GitBox


juannzou commented on issue #7182:
URL: https://github.com/apache/tvm/issues/7182#issuecomment-753602648


   Hi @junrushao1994 
   
   I thought it was a bug, since I tried different settings and still got 
errors.
   
   But I just posted it on the discuss forum, per your suggestion. However, 
you seem to have a restriction policy there for new users, in terms of how 
many links or embedded pictures they can include in their posts, so I 
referenced this post in the forum for more details.
   
   Here is the link to the issue:
   
https://discuss.tvm.apache.org/t/vta-test-fail-while-running-the-2d-convolution-testbench-on-pynq/8789







[tvm] branch main updated: [Frontend][MXNet] add _npi_subtract_scalar (#7191)

2021-01-03 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 86a8504  [Frontend][MXNet] add _npi_subtract_scalar (#7191)
86a8504 is described below

commit 86a8504a7ccb956591ffd1a529f625df7d20b520
Author: insop 
AuthorDate: Sun Jan 3 02:46:55 2021 -0800

[Frontend][MXNet] add _npi_subtract_scalar (#7191)

* [Frontend][MXNet] add _npi_subtract_scalar

- add mxnet numpy operator, subtract
- https://github.com/apache/tvm/issues/7186
- 
https://mxnet.apache.org/versions/master/api/python/docs/api/np/generated/mxnet.np.subtract.html

* Fix python style using black
---
 3rdparty/vta-hw |  2 +-
 python/tvm/relay/frontend/mxnet.py  |  2 ++
 tests/python/frontend/mxnet/test_forward.py | 20 
 3 files changed, 19 insertions(+), 5 deletions(-)

diff --git a/3rdparty/vta-hw b/3rdparty/vta-hw
index 57db5a7..87ce9ac 16
--- a/3rdparty/vta-hw
+++ b/3rdparty/vta-hw
@@ -1 +1 @@
-Subproject commit 57db5a718c74a788c98120ebbe1230797be698c8
+Subproject commit 87ce9acfae550d1a487746e9d06c2e250076e54c
diff --git a/python/tvm/relay/frontend/mxnet.py b/python/tvm/relay/frontend/mxnet.py
index f2330c7..1085e90 100644
--- a/python/tvm/relay/frontend/mxnet.py
+++ b/python/tvm/relay/frontend/mxnet.py
@@ -2693,6 +2693,8 @@ _convert_map = {
 "_npi_multiply_scalar": _binop_scalar(_op.multiply),
 "_npi_add": _rename(_op.add),
 "_npi_add_scalar": _binop_scalar(_op.add),
+"_npi_subtract": _rename(_op.subtract),
+"_npi_subtract_scalar": _binop_scalar(_op.subtract),
 "_npi_where_rscalar": _mx_npi_where_rscalar,
 "_npi_less": _rename(_op.less),
 "_npi_less_equal": _mx_compare(_op.less_equal, _rename),
diff --git a/tests/python/frontend/mxnet/test_forward.py b/tests/python/frontend/mxnet/test_forward.py
index f076a27..d3be8c0 100644
--- a/tests/python/frontend/mxnet/test_forward.py
+++ b/tests/python/frontend/mxnet/test_forward.py
@@ -2062,8 +2062,14 @@ def test_forward_npx_reshape(data_shape, out_shape, dtype, target, reverse, ctx,
 @tvm.testing.parametrize_targets
 @pytest.mark.parametrize("kind", ["graph", "vm", "debug"])
 def test_forward_npi_binary(data_shape, dtype, target, ctx, kind):
-ref_ops = [mx.np.power, mx.np.multiply, mx.np.add, mx.np.less]
-mx_ops = [mx.sym.np.power, mx.sym.np.multiply, mx.sym.np.add, mx.sym.np.less]
+ref_ops = [mx.np.power, mx.np.multiply, mx.np.add, mx.np.subtract, mx.np.less]
+mx_ops = [
+mx.sym.np.power,
+mx.sym.np.multiply,
+mx.sym.np.add,
+mx.sym.np.subtract,
+mx.sym.np.less,
+]
 for i in range(len(ref_ops)):
 ref_op = ref_ops[i]
 mx_op = mx_ops[i]
@@ -2092,8 +2098,14 @@ def test_forward_npi_binary(data_shape, dtype, target, ctx, kind):
 @pytest.mark.parametrize("scalar", [1.0, 2.0, 3.0, 4.0])
 @pytest.mark.parametrize("kind", ["graph", "vm", "debug"])
 def test_forward_npi_binary_scalar(data_shape, dtype, scalar, target, ctx, kind):
-ref_ops = [mx.np.power, mx.np.multiply, mx.np.add, mx.np.true_divide]
-mx_ops = [mx.sym.np.power, mx.sym.np.multiply, mx.sym.np.add, mx.sym.np.true_divide]
+ref_ops = [mx.np.power, mx.np.multiply, mx.np.add, mx.np.subtract, mx.np.true_divide]
+mx_ops = [
+mx.sym.np.power,
+mx.sym.np.multiply,
+mx.sym.np.add,
+mx.sym.np.subtract,
+mx.sym.np.true_divide,
+]
 for i in range(len(ref_ops)):
 ref_op = ref_ops[i]
 mx_op = mx_ops[i]



[GitHub] [tvm] masahi merged pull request #7191: [Frontend][MXNet] add _npi_subtract_scalar

2021-01-03 Thread GitBox


masahi merged pull request #7191:
URL: https://github.com/apache/tvm/pull/7191


   







[tvm] branch main updated (6258fae -> 76a9825)

2021-01-03 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a change to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git.


from 6258fae  [Fix] Tensor core type issue for dense (#7187)
 add 76a9825  Remove seemingly invalid SoftPlus (#7189)

No new revisions were added by this update.

Summary of changes:
 python/tvm/relay/frontend/onnx.py  | 9 -
 tests/python/frontend/onnx/test_forward.py | 1 -
 2 files changed, 10 deletions(-)



[GitHub] [tvm] masahi merged pull request #7189: [FRONTEND][ONNX] Remove seemingly invalid SoftPlus

2021-01-03 Thread GitBox


masahi merged pull request #7189:
URL: https://github.com/apache/tvm/pull/7189


   







[GitHub] [tvm] junrushao1994 commented on pull request #7191: [Frontend][MXNet] add _npi_subtract_scalar

2021-01-03 Thread GitBox


junrushao1994 commented on pull request #7191:
URL: https://github.com/apache/tvm/pull/7191#issuecomment-753592836


   @sxjscience @eric-haibin-lin Please take a look and let's get it merged


