[GitHub] [incubator-tvm] masahi commented on pull request #5729: [ONNX]MaxRoiPool, Mod & Xor op support added

2020-06-04 Thread GitBox
masahi commented on pull request #5729: URL: https://github.com/apache/incubator-tvm/pull/5729#issuecomment-639273379 Thanks @siju-samuel

[GitHub] [incubator-tvm] masahi merged pull request #5729: [ONNX]MaxRoiPool, Mod & Xor op support added

2020-06-04 Thread GitBox
masahi merged pull request #5729: URL: https://github.com/apache/incubator-tvm/pull/5729

[incubator-tvm] branch master updated: [ONNX]MaxRoiPool, Mod & Xor op support added (#5729)

2020-06-04 Thread masahi
This is an automated email from the ASF dual-hosted git repository. masahi pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git The following commit(s) were added to refs/heads/master by this push: new fbc2b87 [ONNX]MaxRoiPool, Mod & Xor op

[GitHub] [incubator-tvm] handar423 commented on pull request #5695: fix small bug about dense_grad

2020-06-04 Thread GitBox
handar423 commented on pull request #5695: URL: https://github.com/apache/incubator-tvm/pull/5695#issuecomment-639246598 Sorry for replying so late! So grateful if you are still here. Thank you for the guidance; I misunderstood the definition of batching and units before, and everything

[GitHub] [incubator-tvm] handar423 closed pull request #5695: fix small bug about dense_grad

2020-06-04 Thread GitBox
handar423 closed pull request #5695: URL: https://github.com/apache/incubator-tvm/pull/5695

[GitHub] [incubator-tvm] leonwanghui opened a new pull request #5733: Make Tensor struct public in Rust runtime

2020-06-04 Thread GitBox
leonwanghui opened a new pull request #5733: URL: https://github.com/apache/incubator-tvm/pull/5733 Signed-off-by: leonwanghui Hi all, this PR proposes to make some private members inside the `Tensor` struct public so that it can be embedded into other crates.

[GitHub] [incubator-tvm] lixiaoquan commented on a change in pull request #5699: [Frontend][TensorFlow] Improve Control Flow and TensorArray

2020-06-04 Thread GitBox
lixiaoquan commented on a change in pull request #5699: URL: https://github.com/apache/incubator-tvm/pull/5699#discussion_r435668087 ## File path: python/tvm/relay/frontend/tensorflow.py ## @@ -3194,6 +3191,55 @@ def _convert_operator(self, op_name, inputs, attrs,

[GitHub] [incubator-tvm] masahi commented on a change in pull request #5619: Add Scatter to Topi/Relay/ONNX via hybrid script

2020-06-04 Thread GitBox
masahi commented on a change in pull request #5619: URL: https://github.com/apache/incubator-tvm/pull/5619#discussion_r435627742 ## File path: src/relay/op/tensor/transform.cc ## @@ -781,6 +781,53 @@ non-zero)doc" TVM_ADD_FILELINE) .set_attr("TOpPattern", kOpaque)

[GitHub] [incubator-tvm] comaniac opened a new pull request #5732: [TOPI][ARM] Fix reshape usage in ARM schedule

2020-06-04 Thread GitBox
comaniac opened a new pull request #5732: URL: https://github.com/apache/incubator-tvm/pull/5732 After #5429, the new shape passed to `relay.reshape` can only be `int`, `Tuple[int, ...]`, `List[int]`, or `Expr`. However, a use case in ARM conv2d alter layout passes `Tuple[int,
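As a hedged illustration of the constraint described above (a minimal sketch, not the actual patch in #5732), the snippet below normalizes a shape made of `IntImm` objects into a plain `List[int]` before calling `relay.reshape`; the names `data` and `imm_shape` are hypothetical placeholders.

```python
# Sketch only: convert IntImm dimensions to Python ints so the newshape
# argument satisfies the int / Tuple[int, ...] / List[int] / Expr constraint.
import tvm
from tvm import relay

data = relay.var("data", shape=(1, 32, 7, 7), dtype="float32")
imm_shape = (tvm.tir.IntImm("int32", 1), tvm.tir.IntImm("int32", 32 * 7 * 7))

new_shape = [int(dim) for dim in imm_shape]   # plain ints, accepted after #5429
out = relay.reshape(data, new_shape)
print(out)
```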

[GitHub] [incubator-tvm] zhiics commented on issue #5728: Mismatched new / delete [ ] in Relay VM

2020-06-04 Thread GitBox
zhiics commented on issue #5728: URL: https://github.com/apache/incubator-tvm/issues/5728#issuecomment-639156952 ah, I see. We should have used `delete[]`. You are welcome to send a PR. Thanks.

[GitHub] [incubator-tvm] tqchen commented on pull request #5601: [DataType] Add bfloat16

2020-06-04 Thread GitBox
tqchen commented on pull request #5601: URL: https://github.com/apache/incubator-tvm/pull/5601#issuecomment-639143314 https://github.com/apache/incubator-tvm/pull/5730 splits the two type codes, so we only need to add BFloat16 to the DataTypeCode. cc @Menooker

[GitHub] [incubator-tvm] tqchen commented on pull request #5730: [REFACTOR] Separate ArgTypeCode from DLDataTypeCode

2020-06-04 Thread GitBox
tqchen commented on pull request #5730: URL: https://github.com/apache/incubator-tvm/pull/5730#issuecomment-639143025 @zhiics Yes, the DLPack changes are needed for the follow-up PRs to use bfloat16.

[incubator-tvm] branch master updated (34c95a8 -> 8a98782)

2020-06-04 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from 34c95a8 [Frontend][TFLite] Add parser support for shape and range (#5329) add 8a98782 [REFACTOR]

[GitHub] [incubator-tvm] tqchen merged pull request #5730: [REFACTOR] Separate ArgTypeCode from DLDataTypeCode

2020-06-04 Thread GitBox
tqchen merged pull request #5730: URL: https://github.com/apache/incubator-tvm/pull/5730

[GitHub] [incubator-tvm] masahi commented on a change in pull request #5727: ROCm warp shuffles and reductions

2020-06-04 Thread GitBox
masahi commented on a change in pull request #5727: URL: https://github.com/apache/incubator-tvm/pull/5727#discussion_r435576994 ## File path: src/tir/transforms/lower_thread_allreduce.cc ## @@ -478,9 +478,20 @@ class ThreadAllreduceBuilder final : public StmtExprMutator {

[GitHub] [incubator-tvm] masahi commented on a change in pull request #5727: ROCm warp shuffles and reductions

2020-06-04 Thread GitBox
masahi commented on a change in pull request #5727: URL: https://github.com/apache/incubator-tvm/pull/5727#discussion_r435576740 ## File path: src/tir/transforms/lower_thread_allreduce.cc ## @@ -478,9 +478,20 @@ class ThreadAllreduceBuilder final : public StmtExprMutator {

[GitHub] [incubator-tvm] tqchen commented on issue #5728: Mismatched new / delete [ ] in Relay VM

2020-06-04 Thread GitBox
tqchen commented on issue #5728: URL: https://github.com/apache/incubator-tvm/issues/5728#issuecomment-639133231 cc @jroesch @zhiics

[GitHub] [incubator-tvm] t-vi edited a comment on pull request #5727: ROCm warp shuffles and reductions

2020-06-04 Thread GitBox
t-vi edited a comment on pull request #5727: URL: https://github.com/apache/incubator-tvm/pull/5727#issuecomment-639109441 That's the idea, yes. In my microbenchmark of the imagenet softmax on the Radeon VII, I'm going from ~140µs to ~14µs. The baseline from PyTorch (handcrafted but

[GitHub] [incubator-tvm] t-vi commented on pull request #5727: ROCm warp shuffles and reductions

2020-06-04 Thread GitBox
t-vi commented on pull request #5727: URL: https://github.com/apache/incubator-tvm/pull/5727#issuecomment-639109441 That's the idea, yes. In my microbenchmark of the imagenet softmax on the Radeon VII, I'm going from ~140µs to ~14µs. The baseline from PyTorch (handcrafted but somewhat

[GitHub] [incubator-tvm] trevor-m opened a new pull request #5731: [TensorFlow] Don't add cast for batch norm when type isn't changing

2020-06-04 Thread GitBox
trevor-m opened a new pull request #5731: URL: https://github.com/apache/incubator-tvm/pull/5731 TensorFlow's batch norm op has two type attributes: `U`, the type of the parameters (scale, offset, mean, and variance), and `T`, the type of the input and output. The TF
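A minimal sketch of the general idea behind this change (only emit a cast when the dtype actually changes); `maybe_cast` is a hypothetical helper for illustration, not the code in #5731.

```python
from tvm import relay

def maybe_cast(expr, src_dtype, dst_dtype):
    """Hypothetical helper: insert relay.cast only when it changes the dtype."""
    if src_dtype == dst_dtype:
        return expr                      # skip the no-op cast
    return relay.cast(expr, dst_dtype)

x = relay.var("x", shape=(1, 8, 4, 4), dtype="float16")
same = maybe_cast(x, "float16", "float16")   # returns x unchanged
up = maybe_cast(x, "float16", "float32")     # wraps x in a cast to float32
```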

[GitHub] [incubator-tvm] masahi commented on pull request #5727: ROCm warp shuffles and reductions

2020-06-04 Thread GitBox
masahi commented on pull request #5727: URL: https://github.com/apache/incubator-tvm/pull/5727#issuecomment-639099200 @t-vi This is great! Does this mean the rocm backend would use shuffle instructions to compute reductions, such as in softmax?

[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5723: Fix the random seed for test_fmod since it fails way too often otherwise

2020-06-04 Thread GitBox
tqchen edited a comment on pull request #5723: URL: https://github.com/apache/incubator-tvm/pull/5723#issuecomment-639094174 This is similar to what we faced in cases like ceil/trunc/round. The idea is to detect whether the data is too close to the boundary, and then try to add a number to
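A rough numpy sketch of that suggestion (my reading of the idea, not the actual test change): detect inputs whose fmod result lands near a boundary and nudge them toward the middle of a bin.

```python
import numpy as np

def nudge_away_from_fmod_boundary(a, b, eps=1e-3):
    """Shift inputs whose fmod(a, b) is within eps of 0 or of |b|."""
    r = np.fmod(a, b)
    near_boundary = (np.abs(r) < eps) | (np.abs(b) - np.abs(r) < eps)
    # Move offending inputs by half a divisor so they sit safely inside a bin.
    return np.where(near_boundary, a + 0.5 * np.abs(b), a)

rng = np.random.RandomState(0)                 # fixed seed, as in the PR title
a = rng.uniform(-10, 10, size=(4, 4)).astype("float32")
b = np.full_like(a, 3.0)
a_safe = nudge_away_from_fmod_boundary(a, b)

r = np.abs(np.fmod(a_safe, b))
assert np.all(r > 1e-4) and np.all(np.abs(b) - r > 1e-4)
```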

[GitHub] [incubator-tvm] tqchen edited a comment on pull request #5723: Fix the random seed for test_fmod since it fails way too often otherwise

2020-06-04 Thread GitBox
tqchen edited a comment on pull request #5723: URL: https://github.com/apache/incubator-tvm/pull/5723#issuecomment-639094174 This is similar to what we faced in cases like ceil/trunc. See an example here:

[GitHub] [incubator-tvm] tqchen commented on pull request #5723: Fix the random seed for test_fmod since it fails way too often otherwise

2020-06-04 Thread GitBox
tqchen commented on pull request #5723: URL: https://github.com/apache/incubator-tvm/pull/5723#issuecomment-639094174 This is similar to what we faced in cases like ceil/trunc. See an example here:

[GitHub] [incubator-tvm] anijain2305 commented on pull request #4805: [Frontend][TFlite] Add parser support for relu6, leaky_relu, relu_n1_to_1, log_softmax

2020-06-04 Thread GitBox
anijain2305 commented on pull request #4805: URL: https://github.com/apache/incubator-tvm/pull/4805#issuecomment-639071565 It seems you still have the old `3rdparty/dmlc-core`. You can check that by clicking on the "Files changed" tab.

[GitHub] [incubator-tvm] notoraptor commented on pull request #5716: [topi][relay] Add operation gather to relay.

2020-06-04 Thread GitBox
notoraptor commented on pull request #5716: URL: https://github.com/apache/incubator-tvm/pull/5716#issuecomment-639035230 @mbrookhart Hi! There is a difference, at least in the output shape, for take:

[GitHub] [incubator-tvm] t-vi commented on a change in pull request #5727: ROCm warp shuffles and reductions

2020-06-04 Thread GitBox
t-vi commented on a change in pull request #5727: URL: https://github.com/apache/incubator-tvm/pull/5727#discussion_r435466266 ## File path: tests/python/unittest/test_tir_transform_lower_warp_memory.py ## @@ -249,9 +245,13 @@ def check(m): B_np = A_np + 1

[GitHub] [incubator-tvm] t-vi commented on a change in pull request #5727: ROCm warp shuffles and reductions

2020-06-04 Thread GitBox
t-vi commented on a change in pull request #5727: URL: https://github.com/apache/incubator-tvm/pull/5727#discussion_r435423642 ## File path: src/target/llvm/intrin_rule_rocm.cc ## @@ -40,8 +41,59 @@ inline void DispatchExternOCML(const TVMArgs& args, TVMRetValue* rv) { *rv

[GitHub] [incubator-tvm] junrushao1994 commented on pull request #5730: [REFACTOR] Separate ArgTypeCode from DLDataTypeCode

2020-06-04 Thread GitBox
junrushao1994 commented on pull request #5730: URL: https://github.com/apache/incubator-tvm/pull/5730#issuecomment-639018766 Will take a look tonight! Thank you!

[GitHub] [incubator-tvm] abergeron commented on pull request #5723: Fix the random seed for test_fmod since it fails way too often otherwise

2020-06-04 Thread GitBox
abergeron commented on pull request #5723: URL: https://github.com/apache/incubator-tvm/pull/5723#issuecomment-639011278 From what I can see, the problem comes when the result of fmod() is very small. It would happen for something like `20.0001 % 10`. I'm not sure how to
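For concreteness, a small numpy illustration of the failure mode described above (not the test itself, and the tolerances are assumed): the float32 rounding error in the input is tiny in absolute terms but large relative to the near-zero fmod result, so a relative-tolerance comparison against a float64 reference blows up.

```python
import numpy as np

a64 = np.float64(20.0001)
a32 = np.float32(20.0001)                    # rounds to a slightly different value
r64 = np.fmod(a64, 10.0)                     # ~1e-4
r32 = np.float64(np.fmod(a32, np.float32(10.0)))
# Relative error is close to 1%, far above a typical 1e-5 rtol.
print(r64, r32, abs(r64 - r32) / abs(r64))
```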

[GitHub] [incubator-tvm] anijain2305 commented on pull request #5329: [Frontend][TFLite] Add parser support for shape and range

2020-06-04 Thread GitBox
anijain2305 commented on pull request #5329: URL: https://github.com/apache/incubator-tvm/pull/5329#issuecomment-639010961 Thanks @dhruvaray @siju-samuel. This is merged!

[GitHub] [incubator-tvm] anijain2305 merged pull request #5329: [Frontend][TFLite] Add parser support for shape and range

2020-06-04 Thread GitBox
anijain2305 merged pull request #5329: URL: https://github.com/apache/incubator-tvm/pull/5329

[incubator-tvm] branch master updated (c2e248f -> 34c95a8)

2020-06-04 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository. anijain2305 pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from c2e248f [TOPI,RELAY][TFLITE] Sparse to dense operator (#5447) add 34c95a8 [Frontend][TFLite] Add

[GitHub] [incubator-tvm] anijain2305 commented on pull request #5447: [TOPI,RELAY][TFLITE] Sparse to dense operator

2020-06-04 Thread GitBox
anijain2305 commented on pull request #5447: URL: https://github.com/apache/incubator-tvm/pull/5447#issuecomment-639010813 Thanks @dhruvaray @siju-samuel. This is merged!

[incubator-tvm] branch master updated (490510d -> c2e248f)

2020-06-04 Thread anijain2305
This is an automated email from the ASF dual-hosted git repository. anijain2305 pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git. from 490510d codegen llvm: move nvptx-specific intrinsic handling into codegen_nvptx (#5726) add

[GitHub] [incubator-tvm] anijain2305 merged pull request #5447: [TOPI,RELAY][TFLITE] Sparse to dense operator

2020-06-04 Thread GitBox
anijain2305 merged pull request #5447: URL: https://github.com/apache/incubator-tvm/pull/5447

[GitHub] [incubator-tvm] tqchen opened a new pull request #5730: [REFACTOR] Separate ArgTypeCode from DLDataTypeCode

2020-06-04 Thread GitBox
tqchen opened a new pull request #5730: URL: https://github.com/apache/incubator-tvm/pull/5730 We use a single enum (TypeCode) to represent both ArgTypeCode and DLDataTypeCode. However, as we start to support more data types, it is clear that the argument type code (in the FFI convention) and

[GitHub] [incubator-tvm] tqchen commented on pull request #5730: [REFACTOR] Separate ArgTypeCode from DLDataTypeCode

2020-06-04 Thread GitBox
tqchen commented on pull request #5730: URL: https://github.com/apache/incubator-tvm/pull/5730#issuecomment-639000274 cc @zhiics @yzhliu @jroesch @junrushao1994

[GitHub] [incubator-tvm] t-vi commented on a change in pull request #5727: ROCm warp shuffles and reductions

2020-06-04 Thread GitBox
t-vi commented on a change in pull request #5727: URL: https://github.com/apache/incubator-tvm/pull/5727#discussion_r435423642 ## File path: src/target/llvm/intrin_rule_rocm.cc ## @@ -40,8 +41,59 @@ inline void DispatchExternOCML(const TVMArgs& args, TVMRetValue* rv) { *rv

[GitHub] [incubator-tvm] t-vi commented on issue #5686: [vulkan] Assertion in tir/transforms/lower_thread_allreduce.cc", line 157 TVMError: Check failed: v:

2020-06-04 Thread GitBox
t-vi commented on issue #5686: URL: https://github.com/apache/incubator-tvm/issues/5686#issuecomment-638992030 @mei-ye Cool! Yes. I ran into trouble when the target info erroneously specified 1 thread per warp for ROCm, which would look similar but not for the same reason. I'm glad you

[GitHub] [incubator-tvm] anijain2305 commented on pull request #5495: [Relay, Topi] [Frontend][TFLite, MXNet] ReverseSequence operator

2020-06-04 Thread GitBox
anijain2305 commented on pull request #5495: URL: https://github.com/apache/incubator-tvm/pull/5495#issuecomment-638988738 @siju-samuel Pinging for review if interested :)

[GitHub] [incubator-tvm] anijain2305 commented on pull request #5329: [Frontend][TFLite] Add parser support for shape and range

2020-06-04 Thread GitBox
anijain2305 commented on pull request #5329: URL: https://github.com/apache/incubator-tvm/pull/5329#issuecomment-638987607 @siju-samuel Can you please approve the PR if you are ok with the changes?

[GitHub] [incubator-tvm] anijain2305 commented on pull request #5447: [TOPI,RELAY][TFLITE] Sparse to dense operator

2020-06-04 Thread GitBox
anijain2305 commented on pull request #5447: URL: https://github.com/apache/incubator-tvm/pull/5447#issuecomment-638985828 @siju-samuel Please approve the PR if you are ok with the changes

[GitHub] [incubator-tvm] anijain2305 commented on pull request #4805: [Frontend][TFlite] Add parser support for relu6, leaky_relu, relu_n1_to_1, log_softmax

2020-06-04 Thread GitBox
anijain2305 commented on pull request #4805: URL: https://github.com/apache/incubator-tvm/pull/4805#issuecomment-638984870 @inadob Your changes look better now. Can you please rebase? (`git submodule update --init --recursive`)

[GitHub] [incubator-tvm] tqchen commented on pull request #5727: ROCm warp shuffles and reductions

2020-06-04 Thread GitBox
tqchen commented on pull request #5727: URL: https://github.com/apache/incubator-tvm/pull/5727#issuecomment-638974554 @t-vi Yes, please rebase to master while addressing the comments.

[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #5727: ROCm warp shuffles and reductions

2020-06-04 Thread GitBox
wpan11nv commented on a change in pull request #5727: URL: https://github.com/apache/incubator-tvm/pull/5727#discussion_r435391687 ## File path: tests/python/unittest/test_tir_transform_lower_warp_memory.py ## @@ -249,9 +245,13 @@ def check(m): B_np = A_np + 1

[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #5727: ROCm warp shuffles and reductions

2020-06-04 Thread GitBox
wpan11nv commented on a change in pull request #5727: URL: https://github.com/apache/incubator-tvm/pull/5727#discussion_r435386022 ## File path: src/target/llvm/intrin_rule_rocm.cc ## @@ -40,8 +41,59 @@ inline void DispatchExternOCML(const TVMArgs& args, TVMRetValue* rv) {

[GitHub] [incubator-tvm] wpan11nv commented on a change in pull request #5600: [TOPI] Improve CUDA softmax scheduling

2020-06-04 Thread GitBox
wpan11nv commented on a change in pull request #5600: URL: https://github.com/apache/incubator-tvm/pull/5600#discussion_r435383834 ## File path: src/target/llvm/codegen_llvm.cc ## @@ -736,7 +736,40 @@ llvm::Function* CodeGenLLVM::GetIntrinsicDecl(llvm::Intrinsic::ID id,

[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #5725: [Bugfix][Serialization] Fix runtime::String backward compatibility in JSON

2020-06-04 Thread GitBox
ANSHUMAN87 commented on pull request #5725: URL: https://github.com/apache/incubator-tvm/pull/5725#issuecomment-638958232 @junrushao1994: Thanks a lot for clarifying! :+1:

[GitHub] [incubator-tvm] majiang31312 commented on issue #5686: [vulkan] Assertion in tir/transforms/lower_thread_allreduce.cc", line 157 TVMError: Check failed: v:

2020-06-04 Thread GitBox
majiang31312 commented on issue #5686: URL: https://github.com/apache/incubator-tvm/issues/5686#issuecomment-638956688 @t-vi Thanks for the advice. In my opinion this is not a backend problem; we can trigger it in the cuda backend (my test case above uses cuda).

[GitHub] [incubator-tvm] majiang31312 edited a comment on issue #5686: [vulkan] Assertion in tir/transforms/lower_thread_allreduce.cc", line 157 TVMError: Check failed: v:

2020-06-04 Thread GitBox
majiang31312 edited a comment on issue #5686: URL: https://github.com/apache/incubator-tvm/issues/5686#issuecomment-638953820 The fix seems quite simple, but I'm not sure whether it's complete. Please take a look at the Discussion section. Thanks! @tqchen @wpan11nv Problem:

[GitHub] [incubator-tvm] majiang31312 commented on issue #5686: [vulkan] Assertion in tir/transforms/lower_thread_allreduce.cc", line 157 TVMError: Check failed: v:

2020-06-04 Thread GitBox
majiang31312 commented on issue #5686: URL: https://github.com/apache/incubator-tvm/issues/5686#issuecomment-638953820 The fix seems quite simple, but I'm not sure whether it's complete. Please take a look at the Discussion section. Thanks! @tqchen @wpan11nv Problem: when

[incubator-tvm-site] branch asf-site updated: Build at Thu Jun 4 09:03:33 PDT 2020

2020-06-04 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git The following commit(s) were added to refs/heads/asf-site by this push: new beecf83 Build at Thu Jun 4

[incubator-tvm-site] branch master updated: Add microtvm blog post (#9)

2020-06-04 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm-site.git The following commit(s) were added to refs/heads/master by this push: new 179bcb4 Add microtvm blog post

[GitHub] [incubator-tvm] junrushao1994 commented on pull request #5725: [Bugfix][Serialization] Fix runtime::String backward compatibility in JSON

2020-06-04 Thread GitBox
junrushao1994 commented on pull request #5725: URL: https://github.com/apache/incubator-tvm/pull/5725#issuecomment-638947410 @ANSHUMAN87 yeah, you are right. This is all about the type name. It is a bit tricky, because the JSON string was generated after 0.6.0, where the type name is

[GitHub] [incubator-tvm] t-vi commented on pull request #5727: ROCm warp shuffles and reductions

2020-06-04 Thread GitBox
t-vi commented on pull request #5727: URL: https://github.com/apache/incubator-tvm/pull/5727#issuecomment-638944272 Just a quick note: Of the two commits, only the second is new to this PR. If you want me to rebase this on master, let me know.

[incubator-tvm] branch master updated: codegen llvm: move nvptx-specific intrinsic handling into codegen_nvptx (#5726)

2020-06-04 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git The following commit(s) were added to refs/heads/master by this push: new 490510d codegen llvm: move

[GitHub] [incubator-tvm] tqchen commented on pull request #5726: codegen llvm: move nvptx-specific intrinsic handling into codegen_nvptx

2020-06-04 Thread GitBox
tqchen commented on pull request #5726: URL: https://github.com/apache/incubator-tvm/pull/5726#issuecomment-638939983 Thanks @t-vi!

[GitHub] [incubator-tvm] tqchen merged pull request #5726: codegen llvm: move nvptx-specific intrinsic handling into codegen_nvptx

2020-06-04 Thread GitBox
tqchen merged pull request #5726: URL: https://github.com/apache/incubator-tvm/pull/5726

[GitHub] [incubator-tvm] tqchen commented on pull request #5725: [Bugfix][Serialization] Fix runtime::String backward compatibility in JSON

2020-06-04 Thread GitBox
tqchen commented on pull request #5725: URL: https://github.com/apache/incubator-tvm/pull/5725#issuecomment-638925609 Thanks @junrushao1994

[incubator-tvm] branch master updated: Fix runtime::String backward compatibility in JSON (#5725)

2020-06-04 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git The following commit(s) were added to refs/heads/master by this push: new 8935990 Fix runtime::String backward

[GitHub] [incubator-tvm] tqchen merged pull request #5725: [Bugfix][Serialization] Fix runtime::String backward compatibility in JSON

2020-06-04 Thread GitBox
tqchen merged pull request #5725: URL: https://github.com/apache/incubator-tvm/pull/5725

[GitHub] [incubator-tvm] tqchen commented on pull request #5727: ROCm warp shuffles and reductions

2020-06-04 Thread GitBox
tqchen commented on pull request #5727: URL: https://github.com/apache/incubator-tvm/pull/5727#issuecomment-638924385 cc @wpan11nv @masahi

[GitHub] [incubator-tvm] yongwww commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-06-04 Thread GitBox
yongwww commented on a change in pull request #4312: URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r435344564 ## File path: include/tvm/relay/attrs/transform.h ## @@ -210,14 +210,22 @@ struct SplitAttrs : public tvm::AttrsNode { /*! \brief Attributes for

[GitHub] [incubator-tvm] siju-samuel opened a new pull request #5729: [ONNX]MaxRoiPool, Mod & Xor op support added

2020-06-04 Thread GitBox
siju-samuel opened a new pull request #5729: URL: https://github.com/apache/incubator-tvm/pull/5729 - MaxRoiPool - Mod - Xor @masahi @FrozenGene please help me review this PR. TIA

[GitHub] [incubator-tvm] GalMoore commented on issue #4412: Python binding: No module named 'topi'

2020-06-04 Thread GitBox
GalMoore commented on issue #4412: URL: https://github.com/apache/incubator-tvm/issues/4412#issuecomment-638884194 For me the issue was that the TVM I built was not on PYTHONPATH. Solution from [here](https://docs.tvm.ai/install/from_source.html): ``` export TVM_HOME=/path/to/tvm

[GitHub] [incubator-tvm] akosik-anyvision opened a new issue #5728: Mismatched new / delete [] in Relay VM

2020-06-04 Thread GitBox
akosik-anyvision opened a new issue #5728: URL: https://github.com/apache/incubator-tvm/issues/5728 There seem to be mismatched new / delete[] statements in the following lines:

[GitHub] [incubator-tvm] t-vi commented on pull request #5600: [TOPI] Improve CUDA softmax scheduling

2020-06-04 Thread GitBox
t-vi commented on pull request #5600: URL: https://github.com/apache/incubator-tvm/pull/5600#issuecomment-638823562 @wpan11nv Thanks for your offer to help. I submitted the clean-up #5726, and then in #5727 I added ROCm warp reductions. One of the things I did was to avoid assuming a fixed

[GitHub] [incubator-tvm] t-vi commented on pull request #5727: ROCm warp shuffles and reductions

2020-06-04 Thread GitBox
t-vi commented on pull request #5727: URL: https://github.com/apache/incubator-tvm/pull/5727#issuecomment-638817130 @masahi Could I interest you in this? @tqchen This is the follow-up to the discussion in #5600

[GitHub] [incubator-tvm] t-vi opened a new pull request #5727: ROCm warp shuffles and reductions

2020-06-04 Thread GitBox
t-vi opened a new pull request #5727: URL: https://github.com/apache/incubator-tvm/pull/5727 This adds warp shuffle intrinsics to ROCm and enables reductions. - There was at least one hardcoded 32 threads per warp assumption in `lower_thread_allreduce`. - I have tentatively

[GitHub] [incubator-tvm] t-vi edited a comment on issue #5686: [vulkan] Assertion in tir/transforms/lower_thread_allreduce.cc", line 157 TVMError: Check failed: v:

2020-06-04 Thread GitBox
t-vi edited a comment on issue #5686: URL: https://github.com/apache/incubator-tvm/issues/5686#issuecomment-638753031 This sounds to me similar to the symptoms discussed in the recent posts in #5600. So quite likely, making the cuda softmax schedule specific to cuda would fix this (did

[GitHub] [incubator-tvm] t-vi commented on pull request #5726: codegen llvm: move nvptx-specific intrinsic handling into codegen_nvptx

2020-06-04 Thread GitBox
t-vi commented on pull request #5726: URL: https://github.com/apache/incubator-tvm/pull/5726#issuecomment-638808130 That was a bit too much moving... Fixed. I also changed the softmax schedule to only activate warp shuffle reductions on cuda/nvptx. This should fix #5686.
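A hedged sketch of that gating logic only; `target_name` below is a plain string parameter for illustration and makes no claim about the real schedule's signature or TVM's target API, and the dtype restriction shown is an assumption added for the example.

```python
def use_warp_shuffle_reduction(target_name: str, dtype: str) -> bool:
    """Illustrative predicate: enable warp shuffle reductions only where known-good."""
    if target_name not in ("cuda", "nvptx"):
        return False                      # other backends keep the shared-memory path
    return dtype in ("float32", "int32")  # assume shuffles act on 32-bit lanes here

assert use_warp_shuffle_reduction("cuda", "float32")
assert not use_warp_shuffle_reduction("vulkan", "float32")
```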

[GitHub] [incubator-tvm] t-vi commented on issue #5686: [vulkan] Assertion in tir/transforms/lower_thread_allreduce.cc", line 157 TVMError: Check failed: v:

2020-06-04 Thread GitBox
t-vi commented on issue #5686: URL: https://github.com/apache/incubator-tvm/issues/5686#issuecomment-638753031 This sounds to me similar to the symptoms discussed in the recent posts in #5600. So quite likely, making the cuda softmax schedule specific to cuda would fix this. Of course,

[GitHub] [incubator-tvm] t-vi opened a new pull request #5726: codegen llvm: move nvptx-specific intrinsic handling into codegen_nvptx

2020-06-04 Thread GitBox
t-vi opened a new pull request #5726: URL: https://github.com/apache/incubator-tvm/pull/5726 See discussion in #5600. I'm also throwing in a pointer lifetime fix for the context held by NVPTX, because otherwise `topi/tests/python/test_topi_softmax.py` would segfault for me. With the

[GitHub] [incubator-tvm] ANSHUMAN87 edited a comment on pull request #5725: [Bugfix][Serialization] Fix runtime::String backward compatibility in JSON

2020-06-04 Thread GitBox
ANSHUMAN87 edited a comment on pull request #5725: URL: https://github.com/apache/incubator-tvm/pull/5725#issuecomment-638734857 Thanks @junrushao1994! As I remember, I had handled the version upgrade for "GlobalVar" in PR #5547. So maybe we need to discuss how it got bypassed? I

[GitHub] [incubator-tvm] ANSHUMAN87 commented on pull request #5725: [Bugfix][Serialization] Fix runtime::String backward compatibility in JSON

2020-06-04 Thread GitBox
ANSHUMAN87 commented on pull request #5725: URL: https://github.com/apache/incubator-tvm/pull/5725#issuecomment-638734857 Thanks @junrushao1994! As I remember, I had handled the version upgrade for "GlobalVar" in PR #5547. So maybe we need to discuss how it got bypassed? I saw the

[GitHub] [incubator-tvm] t-vi commented on a change in pull request #5600: [TOPI] Improve CUDA softmax scheduling

2020-06-04 Thread GitBox
t-vi commented on a change in pull request #5600: URL: https://github.com/apache/incubator-tvm/pull/5600#discussion_r435088925 ## File path: topi/python/topi/cuda/softmax.py ## @@ -39,6 +39,7 @@ def schedule_softmax(outs): outs = [outs] if isinstance(outs,

[GitHub] [incubator-tvm] t-vi commented on a change in pull request #5600: [TOPI] Improve CUDA softmax scheduling

2020-06-04 Thread GitBox
t-vi commented on a change in pull request #5600: URL: https://github.com/apache/incubator-tvm/pull/5600#discussion_r435088614 ## File path: src/target/llvm/codegen_llvm.cc ## @@ -736,7 +736,40 @@ llvm::Function* CodeGenLLVM::GetIntrinsicDecl(llvm::Intrinsic::ID id,

[GitHub] [incubator-tvm] t-vi edited a comment on pull request #5600: [TOPI] Improve CUDA softmax scheduling

2020-06-04 Thread GitBox
t-vi edited a comment on pull request #5600: URL: https://github.com/apache/incubator-tvm/pull/5600#issuecomment-638622419 I'm adding shfl intrinsics to the rocm bits (using `tvm.intrin.rule.rocm.tvm_warp_shuffle /-up/-down` definitions). I'll probably run into the nvptx bits in the

[GitHub] [incubator-tvm] kevinthesun commented on pull request #5724: [AutoTVM, Relay] Clear compile engine after task extraction

2020-06-04 Thread GitBox
kevinthesun commented on pull request #5724: URL: https://github.com/apache/incubator-tvm/pull/5724#issuecomment-638639332 Thanks @vinx13 @comaniac

[incubator-tvm] branch master updated: [AutoTVM, Relay] Clear compile engine after task extraction (#5724)

2020-06-04 Thread kevinthesun
This is an automated email from the ASF dual-hosted git repository. kevinthesun pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-tvm.git The following commit(s) were added to refs/heads/master by this push: new a642089 [AutoTVM, Relay] Clear

[GitHub] [incubator-tvm] kevinthesun merged pull request #5724: [AutoTVM, Relay] Clear compile engine after task extraction

2020-06-04 Thread GitBox
kevinthesun merged pull request #5724: URL: https://github.com/apache/incubator-tvm/pull/5724

[GitHub] [incubator-tvm] leonwanghui commented on pull request #5527: [Rust] Second stage of Rust Refactor

2020-06-04 Thread GitBox
leonwanghui commented on pull request #5527: URL: https://github.com/apache/incubator-tvm/pull/5527#issuecomment-638632884 @jroesch It seems quite promising for Rust bindings, so are there any updates on this PR?

[GitHub] [incubator-tvm] kevinthesun commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-06-04 Thread GitBox
kevinthesun commented on a change in pull request #4312: URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r435020698 ## File path: python/tvm/relay/frontend/tensorflow.py ## @@ -2027,6 +2081,8 @@ def _impl(inputs, attr, params, mod): 'Mod'

[GitHub] [incubator-tvm] junrushao1994 opened a new pull request #5725: [Bugfix][Serialization] Fix runtime::String backward compatibility in JSON

2020-06-04 Thread GitBox
junrushao1994 opened a new pull request #5725: URL: https://github.com/apache/incubator-tvm/pull/5725 @antinucleon sent me a JSON file which cannot be loaded at the current HEAD of master. After bisecting around, I found that there are some missing parts (or just typos, I think) in

[GitHub] [incubator-tvm] icemelon9 commented on a change in pull request #4312: [TOPI][Relay][OP] Dynamic NMS and strided_slice

2020-06-04 Thread GitBox
icemelon9 commented on a change in pull request #4312: URL: https://github.com/apache/incubator-tvm/pull/4312#discussion_r435011699 ## File path: python/tvm/relay/frontend/tensorflow.py ## @@ -2027,6 +2081,8 @@ def _impl(inputs, attr, params, mod): 'Mod'

[GitHub] [incubator-tvm] t-vi commented on pull request #5600: [TOPI] Improve CUDA softmax scheduling

2020-06-04 Thread GitBox
t-vi commented on pull request #5600: URL: https://github.com/apache/incubator-tvm/pull/5600#issuecomment-638622419 I'm adding shfl intrinsics to the rocm bits (using `tvm.intrin.rule.rocm.tvm_warp_shuffle /-up/-down` definitions). I'm currently seeing a funny effect where I get a