[I] [Bug] MetaScheduler Literal value exceeds maximum of int32 [tvm]

2023-10-25 Thread via GitHub


malixian opened a new issue, #15987:
URL: https://github.com/apache/tvm/issues/15987

   ### Expected behavior
   I tried to use MetaScheduler to tune a matmul whose dimensions are m=8192, 
n=14336, k=8192.
   When n=8192 everything is fine, but once m or n equals 14336, the error 
`RuntimeError: parallel_for_dynamic error with [02:23:57] 
/home/malixian/repos/tensorir/tvm/src/ir/expr.cc:88: InternalError: Check 
failed: value < 1LL << (dtype.bits() - 1) (8589934591 vs. 2147483648) : 
ValueError: Literal value 8589934591 exceeds maximum of int32` occurs. BTW, 
it is fine when k equals 14336.
   Based on the error message, I tried commenting out the `ICHECK` in the 
`IntImm` constructor in expr.cc, and tuning then ran normally again.
   I think the `DataType` used by TIR should be expanded to handle this case.
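   For reference, a minimal sketch (independent of MetaSchedule, assuming only a 
stock TVM build) that trips the same check in `src/ir/expr.cc`:

   ```
   import tvm

   # An int32 IntImm above 2**31 - 1 hits the same ICHECK as the tuning run,
   # while an int64 literal is accepted.
   tvm.tir.IntImm("int64", 8589934591)  # ok
   tvm.tir.IntImm("int32", 8589934591)  # raises: Literal value 8589934591 exceeds maximum of int32
   ```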
   
   ### Actual behavior
   
   error `RuntimeError: parallel_for_dynamic error with [02:23:57] 
/home/malixian/repos/tensorir/tvm/src/ir/expr.cc:88: InternalError: Check 
failed: value < 1LL << (dtype.bits() - 1) (8589934591 vs. 2147483648) : 
ValueError: Literal value 8589934591 exceeds maximum of int32`
   
   ### Environment
   
   TVM version is '0.15.dev0'
   
   ### Steps to reproduce
   
   
   ```
   import tempfile

   import tvm
   from tvm import te
   from tvm import meta_schedule as ms
   from tvm.meta_schedule.builder import LocalBuilder
   from tvm.target import Target


   def matmul_fp16(M: int, N: int, K: int, in_dtype: str, out_dtype: str):
       x = te.placeholder((M, K), name="X", dtype=in_dtype)
       y = te.placeholder((K, N), name="Y", dtype=in_dtype)
       k = te.reduce_axis((0, K), name="k")
       c = te.compute(  # pylint: disable=invalid-name
           (M, N),
           lambda i, j: te.sum(x[i][k].astype(out_dtype) * y[k][j].astype(out_dtype), axis=[k]),
           name="C",
       )
       return (x, y, c)


   def tune(in_dtype, out_dtype):
       target = Target("nvidia/nvidia-a100")
       M, N, K = 8192, 14336, 8192
       func = te.create_prim_func(
           matmul_fp16(M=M, N=N, K=K, in_dtype=in_dtype, out_dtype=out_dtype)
       ).with_attr({"global_symbol": "main"})

       space = ms.space_generator.PostOrderApply(
           sch_rules="cuda-tensorcore",
           postprocs="cuda-tensorcore",
           mutator_probs="cuda-tensorcore",
       )

       mod = tvm.IRModule({"main": func})
       with tempfile.TemporaryDirectory() as work_dir:
           db = ms.tir_integration.tune_tir(
               mod=mod,
               target=target,
               work_dir=work_dir,
               max_trials_global=32,
               builder=LocalBuilder(
                   f_build="meta_schedule.builder.async_build",
                   initializer=initializer,  # `initializer` is defined elsewhere in the original script
               ),
               space=space,
           )
           sch = db.query_schedule(mod, target=target, workload_name="main")
           with tvm.transform.PassContext(config={"tir.use_async_copy": 1}):
               rt_mod = tvm.build(sch.mod, target=target)
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[tvm] branch nightly updated (3c4ee86d97 -> de56d8c950)

2023-10-25 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch nightly
in repository https://gitbox.apache.org/repos/asf/tvm.git


from 3c4ee86d97 [CMake] Fix order of GNUInstallDirs module (#15966)
 add 885fc27390 [TVMScript][TIR] Pretty print TIR LLVM function name 
(#15953)
 add de56d8c950 [Hotfix] Mark python-FFI handling with TVM_DLL (#15970)

No new revisions were added by this update.

Summary of changes:
 include/tvm/runtime/c_runtime_api.h| 48 ++
 python/tvm/tir/op.py   |  9 ++--
 src/runtime/c_runtime_api.cc   | 14 +++
 src/script/printer/tir/expr.cc | 25 +++
 tests/python/unittest/test_tir_ops.py  |  7 
 .../python/unittest/test_tvmscript_printer_tir.py  |  7 
 6 files changed, 98 insertions(+), 12 deletions(-)



Re: [PR] [Fix][TIR]fix symbolic strides lower [tvm]

2023-10-25 Thread via GitHub


JackWeiw commented on PR #15986:
URL: https://github.com/apache/tvm/pull/15986#issuecomment-1780389209

   CC @Lunderberg @wrongtest-intellif





[PR] [Fix][TIR]fix symbolic strides lower [tvm]

2023-10-25 Thread via GitHub


JackWeiw opened a new pull request, #15986:
URL: https://github.com/apache/tvm/pull/15986

   The `compact_buffer_region` pass rewrites a shared buffer's stride[0] to
   `T.int64(72) * T.min((n + T.int64(63)) // T.int64(64) * T.int64(64), 
T.int64(96))` while stride[1] is `T.int64(72)`,
   but the `LowerOpaqueBlock` pass then reports:
   `InternalError: Check failed: (is_zero(floormod(buffer->strides[i - 1], 
buffer->strides[i]))) is false`
   
   For a more detailed discussion, see 
[here](https://discuss.tvm.apache.org/t/bug-tir-symbolic-floormod/15826).
   
   A second bug occurs in the `InjectPTXAsyncCopy` pass: `dst_offset.dtype` can 
be int64 while the dtype of `PrimExpr(index_factor)` defaults to int32, which 
causes a dtype mismatch when calling `tir::Mul`.
   
   To reproduce the problem in `InjectPTXAsyncCopy`, see the script 
[here](https://gist.github.com/JackWeiw/5b80956ab44c0f63d4f434f18f42cc89)
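   For reference, a minimal sketch of the dtype mismatch (independent of the 
real schedule; the variable names are illustrative):

   ```python
   import tvm
   from tvm import tir

   # dst_offset is int64 (as produced by the symbolic strides above), while a
   # plain index factor becomes an int32 PrimExpr by default; constructing
   # tir.Mul directly then fails its dtype check, which is what
   # InjectPTXAsyncCopy runs into.
   dst_offset = tir.Var("dst_offset", "int64")
   index_factor = tir.IntImm("int32", 4)
   tir.Mul(dst_offset, index_factor)  # raises a "mismatched types" error
   ```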
   





[PR] metal perf [tvm]

2023-10-25 Thread via GitHub


spectrometerHBH opened a new pull request, #15985:
URL: https://github.com/apache/tvm/pull/15985

   On M2 Ultra, q4f16_1:
   
   7B: 74 tok/s -> 89 tok/s





Re: [PR] [Unity][MSC][M1.3] Add translate && codegen for tensorrt [tvm]

2023-10-25 Thread via GitHub


Hzfengsy merged PR #15950:
URL: https://github.com/apache/tvm/pull/15950





Re: [PR] [Fix][TIR] Symbolic strides lower [tvm]

2023-10-25 Thread via GitHub


JackWeiw closed pull request #15984: [Fix][TIR] Symbolic strides lower
URL: https://github.com/apache/tvm/pull/15984





[PR] [Fix][TIR] Symbolic strides lower [tvm]

2023-10-25 Thread via GitHub


JackWeiw opened a new pull request, #15984:
URL: https://github.com/apache/tvm/pull/15984

   The `compact_buffer_region` pass rewrites a shared buffer's stride[0] to
   `T.int64(72) * T.min((n + T.int64(63)) // T.int64(64) * T.int64(64), 
T.int64(96))` while stride[1] is `T.int64(72)`,
   but the `LowerOpaqueBlock` pass then reports:
   `InternalError: Check failed: (is_zero(floormod(buffer->strides[i - 1], 
buffer->strides[i]))) is false`
   
   For a more detailed discussion, see 
[here](https://discuss.tvm.apache.org/t/bug-tir-symbolic-floormod/15826).
   
   [Here](https://gist.github.com/JackWeiw/5b80956ab44c0f63d4f434f18f42cc89) is 
the script to reproduce the dtype mismatch in the `InjectPTXAsyncCopy` pass.
   





Re: [PR] [BugFix][TIR] fix error in symbolic floormod [tvm]

2023-10-25 Thread via GitHub


JackWeiw commented on PR #15961:
URL: https://github.com/apache/tvm/pull/15961#issuecomment-1780341103

   > It looks like this PR isn't unity-specific. Can the PR be applied to the 
`main` branch instead, so we get the bugfix on both branches?
   
![1](https://github.com/apache/tvm/assets/126441921/d3c4ff70-d608-424b-8b28-c2ef94766b99)
   
   Yes! Should I open a new PR like the one above, or recommit the code onto the 
main branch and open a new PR there?





[tvm] branch unity updated: [Unity][BYOC] Support variable-length attention by flash attention (#15959)

2023-10-25 Thread wuwei
This is an automated email from the ASF dual-hosted git repository.

wuwei pushed a commit to branch unity
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/unity by this push:
 new ef39f37f49 [Unity][BYOC] Support variable-length attention by flash 
attention (#15959)
ef39f37f49 is described below

commit ef39f37f49c8dbb885d56a92e97427d3e4ec10c4
Author: masahi 
AuthorDate: Thu Oct 26 07:43:25 2023 +0900

[Unity][BYOC] Support variable-length attention by flash attention (#15959)

* works

* add test

* fix tests
---
 3rdparty/libflash_attn|   2 +-
 python/tvm/contrib/cutlass/attention_operation.py |  56 +++
 python/tvm/contrib/cutlass/gen_tensor_op.py   |  16 +-
 tests/python/relax/test_codegen_cutlass.py| 190 --
 4 files changed, 206 insertions(+), 58 deletions(-)

diff --git a/3rdparty/libflash_attn b/3rdparty/libflash_attn
index c1d793ad93..55d3603f74 16
--- a/3rdparty/libflash_attn
+++ b/3rdparty/libflash_attn
@@ -1 +1 @@
-Subproject commit c1d793ad939c8ec3cec351db84bc80808e4d34c3
+Subproject commit 55d3603f741eb68e82640ff55ccea4b17dd8053e
diff --git a/python/tvm/contrib/cutlass/attention_operation.py 
b/python/tvm/contrib/cutlass/attention_operation.py
index 5579819001..7084a105c8 100644
--- a/python/tvm/contrib/cutlass/attention_operation.py
+++ b/python/tvm/contrib/cutlass/attention_operation.py
@@ -279,3 +279,59 @@ def instantiate_flash_attention_template(attrs):
 return substitute_template(template_stacked, attrs)
 
 return substitute_template(template, attrs)
+
+
+def instantiate_flash_attention_var_len_template(attrs):
+"""Return host code for flash attention with variable sequence lengths."""
+
+template = """
+int _max_seqlen_q, _max_seqlen_k;
+cudaMemcpy(&_max_seqlen_q, (int32_t*)${max_seqlen_q}->data, 
sizeof(int32_t),
+   cudaMemcpyDeviceToHost);
+cudaMemcpy(&_max_seqlen_k, (int32_t*)${max_seqlen_k}->data, 
sizeof(int32_t),
+   cudaMemcpyDeviceToHost);
+
+int batch_size = ${seqstart_q}->shape[0] - 1;
+
+int q_head_stride = ${head_dim};
+int k_head_stride = ${head_dim};
+int v_head_stride = ${head_dim};
+int o_head_stride = ${head_dim};
+int q_row_stride = q_head_stride * ${num_q_heads};
+int k_row_stride = k_head_stride * ${num_kv_heads};
+int v_row_stride = v_head_stride * ${num_kv_heads};
+int o_row_stride = o_head_stride * ${num_q_heads};
+
+auto func = tvm::runtime::Registry::Get("runtime.get_cuda_stream");
+ICHECK(func != nullptr);
+cudaStream_t stream = static_cast<cudaStream_t>((*func)().operator void*());
+
+flash_attn::flash_attention_var_len_forward(
+static_cast(${query}->data),
+   static_cast(${key}->data),
+   static_cast(${value}->data),
+static_cast(${seqstart_q}->data),
+static_cast(${seqstart_k}->data),
+   static_cast(out0->data),
+   batch_size,
+   _max_seqlen_q,
+   _max_seqlen_k,
+   ${num_q_heads},
+   ${num_kv_heads},
+   ${head_dim},
+   q_head_stride,
+   k_head_stride,
+   v_head_stride,
+   o_head_stride,
+   q_row_stride,
+   k_row_stride,
+   v_row_stride,
+   o_row_stride,
+   ${scale},
+   ${is_causal},
+${is_causal} ? _max_seqlen_k : -1,
+${window_size_right},
+   stream);
+"""
+
+return substitute_template(template, attrs)
diff --git a/python/tvm/contrib/cutlass/gen_tensor_op.py 
b/python/tvm/contrib/cutlass/gen_tensor_op.py
index e86a02df60..d42791d71b 100644
--- a/python/tvm/contrib/cutlass/gen_tensor_op.py
+++ b/python/tvm/contrib/cutlass/gen_tensor_op.py
@@ -32,6 +32,7 @@ from . import _ffi_api as ffi
 from .attention_operation import (
 instantiate_attention_template,
 instantiate_flash_attention_template,
+instantiate_flash_attention_var_len_template,
 )
 from .conv2d_operation import instantiate_conv2d_template
 from .gemm_operation import instantiate_gemm_template, emit_fp16A_intB_matmul
@@ -778,7 +779,6 @@ def instantiate_template(func_name, annotations, func_args):
 )
 # Flash v2 is currently not supported for sm < 80
 and int(annotations["arch"]) >= 80
-and not is_var_len
 )
 
 if "window_size" in annotations:
@@ -789,15 +789,23 @@ def instantiate_template(func_name, annotations, 
func_args):

Re: [PR] [Unity][BYOC] Support variable-length attention by flash attention [tvm]

2023-10-25 Thread via GitHub


vinx13 merged PR #15959:
URL: https://github.com/apache/tvm/pull/15959





Re: [PR] [Unity][BYOC] Fix cuBLAS BYOC compatibilty with Disco + `ThreadedSession` [tvm]

2023-10-25 Thread via GitHub


vinx13 merged PR #15977:
URL: https://github.com/apache/tvm/pull/15977





[tvm] branch unity updated: [Unity][BYOC] Fix cuBLAS BYOC compatibilty with Disco + `ThreadedSession` (#15977)

2023-10-25 Thread wuwei
This is an automated email from the ASF dual-hosted git repository.

wuwei pushed a commit to branch unity
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/unity by this push:
 new 04c6863a25 [Unity][BYOC] Fix cuBLAS BYOC compatibilty with Disco + 
`ThreadedSession` (#15977)
04c6863a25 is described below

commit 04c6863a25747d0fa745b01fcf47696ba85e1388
Author: masahi 
AuthorDate: Thu Oct 26 07:42:10 2023 +0900

[Unity][BYOC] Fix cuBLAS BYOC compatibilty with Disco + `ThreadedSession` 
(#15977)

Fix cuBLAS BYOC compatibilty with Disco with ThreadedSession
---
 src/runtime/contrib/cublas/cublas_json_runtime.cc | 68 +--
 1 file changed, 51 insertions(+), 17 deletions(-)

diff --git a/src/runtime/contrib/cublas/cublas_json_runtime.cc 
b/src/runtime/contrib/cublas/cublas_json_runtime.cc
index 9617559d7e..c6916d4f86 100644
--- a/src/runtime/contrib/cublas/cublas_json_runtime.cc
+++ b/src/runtime/contrib/cublas/cublas_json_runtime.cc
@@ -49,21 +49,69 @@ class CublasJSONRuntime : public JSONRuntimeBase {
 
   void Init(const Array& consts) override {}
 
+  PackedFunc GetFunction(const String& name, const ObjectPtr& 
sptr_to_self) override {
+// JSONRuntimeBase::SetInputOutputBuffers(...) is not thread safe. Since 
CublasJSONRuntime
+// can be used by multiple GPUs running on different threads, we avoid 
using that function
+// and directly call cuBLAS on the inputs from TVMArgs.
+if (this->symbol_name_ == name) {
+  return PackedFunc([sptr_to_self, this](TVMArgs args, TVMRetValue* rv) {
+ICHECK(this->initialized_) << "The module has not been initialized";
+this->Run(args);
+  });
+} else {
+  return JSONRuntimeBase::GetFunction(name, sptr_to_self);
+}
+  }
+
   const char* type_key() const override { return "cublas_json"; }  // May be 
overridden
 
-  void Run() override {
+  void Run(TVMArgs args) {
 auto* entry_ptr = tvm::contrib::CuBlasLtThreadEntry::ThreadLocal();
 
 auto func = tvm::runtime::Registry::Get("runtime.get_cuda_stream");
 ICHECK(func != nullptr);
cudaStream_t stream = static_cast<cudaStream_t>((*func)().operator void*());
 
+std::vector dl_tensors(NumEntries());
+
+for (size_t i = 0; i < static_cast(args.size()); i++) {
+  auto eid = i < input_var_eid_.size() ? input_var_eid_[i]
+   : EntryID(outputs_[i - 
input_var_eid_.size()]);
+  ICHECK(args[i].type_code() == kTVMNDArrayHandle || args[i].type_code() 
== kTVMDLTensorHandle)
+  << "Expect NDArray or DLTensor as inputs";
+
+  const DLTensor* arg;
+  if (args[i].IsObjectRef()) {
+NDArray arr = args[i];
+arg = arr.operator->();
+  } else {
+arg = args[i].operator DLTensor*();
+  }
+
+  dl_tensors[eid] = arg;
+}
+
+auto get_input = [this, &dl_tensors](const JSONGraphNode& node, int idx) {
+  ICHECK_LT(idx, node.GetInputs().size());
+  auto eid = EntryID(node.GetInputs()[idx]);
+  ICHECK(eid < dl_tensors.size());
+  return dl_tensors[eid];
+};
+
+auto get_inputs = [=](const JSONGraphNode& node, bool has_bias) {
+  const DLTensor* bias = nullptr;
+  if (has_bias) {
+bias = get_input(node, 2);
+  }
+  return std::make_tuple(get_input(node, 0), get_input(node, 1), bias);
+};
+
 for (size_t i = 0; i < nodes_.size(); ++i) {
   const auto& node = nodes_[i];
   if (node.GetOpType() == "kernel") {
 auto op_name = node.GetOpName();
 uint32_t output_eid = EntryID(outputs_[0]);
-auto out_ptr = data_entry_[output_eid];
+auto out_ptr = dl_tensors[output_eid];
 bool transa = false;
 bool transb = false;
 cublasLtEpilogue_t epilogue = CUBLASLT_EPILOGUE_DEFAULT;
@@ -80,14 +128,6 @@ class CublasJSONRuntime : public JSONRuntimeBase {
   epilogue = CUBLASLT_EPILOGUE_BIAS;
 }
 
-auto get_inputs = [this](const JSONGraphNode& node, bool has_bias) {
-  const DLTensor* bias = nullptr;
-  if (has_bias) {
-bias = GetInput(node, 2);
-  }
-  return std::make_tuple(GetInput(node, 0), GetInput(node, 1), bias);
-};
-
 auto [a_ptr, b_ptr, bias_ptr] = get_inputs(node, epilogue != 
CUBLASLT_EPILOGUE_DEFAULT);
 
 tvm::contrib::CallCublasLt(entry_ptr->handle, stream, a_ptr, b_ptr, 
bias_ptr, out_ptr,
@@ -96,13 +136,7 @@ class CublasJSONRuntime : public JSONRuntimeBase {
 }
   }
 
- private:
-  const DLTensor* GetInput(const JSONGraphNode& node, const int idx) {
-ICHECK_LT(idx, node.GetInputs().size());
-auto eid = EntryID(node.GetInputs()[idx]);
-ICHECK(eid < data_entry_.size());
-return data_entry_[eid];
-  }
+  void Run() override { LOG(FATAL) << "Unreachable"; }
 };
 
 runtime::Module CublasJSONRuntimeCreate(String symbol_name, String graph_json,



Re: [PR] [Unity][Transform] Handle relax.Var as call_tir args when lowering [tvm]

2023-10-25 Thread via GitHub


slyubomirsky commented on PR #15916:
URL: https://github.com/apache/tvm/pull/15916#issuecomment-1780155887

   See also [this 
issue](https://discuss.tvm.apache.org/t/validity-of-common-subexpression-elimination-pass-that-simplifies-call-tir-args/15832).
 Allowing `call_tir` to be more general would allow us to avoid special 
handling for `call_tir` (and its variants) in that case. It's a question of 
which we'd prefer.





Re: [PR] [Unity][Transform] Handle relax.Var as call_tir args when lowering [tvm]

2023-10-25 Thread via GitHub


tqchen commented on PR #15916:
URL: https://github.com/apache/tvm/pull/15916#issuecomment-1780084913

   Thanks for the PR, I know this is indeed a generalization and there are some 
tradeoffs to be considered here. Specifically, we should consider the following 
alternative:
   
   - C0: We enforce that `call_tir` only takes an explicit Tuple, and enforce 
this in the well-formedness check.
   
   This does limit what we can express in the language, but it would greatly 
simplify the logic that leverages `call_tir`. Such simplicity helps in many 
cases: passes stay simpler, and the many passes that depend on `call_tir` 
pattern matching benefit from it.
   
   





Re: [PR] [Unity][Transform] Handle relax.Var as call_tir args when lowering [tvm]

2023-10-25 Thread via GitHub


tqchen commented on code in PR #15916:
URL: https://github.com/apache/tvm/pull/15916#discussion_r1372345395


##
include/tvm/relax/expr_functor.h:
##
@@ -278,6 +278,37 @@ class ExprVisitor : public ExprFunctor {
   virtual void VisitSpan(const Span& span);
   virtual void VisitPrimExpr(const PrimExpr& expr);
 
+  /*!
+   * \brief Look up the value bound to a variable.
+   * \param var The var to be looked up.
+   * \return The value bound to the input \p var.
+   * \note For function parameters, this function returns NullOpt.
+   */
+  inline Optional LookupBinding(const Var& var) {

Review Comment:
   I think we should instead enforce call_tir to be a more restricted form






Re: [PR] [Unity][Transform] Handle relax.Var as call_tir args when lowering [tvm]

2023-10-25 Thread via GitHub


Lunderberg commented on code in PR #15916:
URL: https://github.com/apache/tvm/pull/15916#discussion_r1372341781


##
src/relax/transform/call_tir_rewrite.cc:
##
@@ -111,41 +111,69 @@ class CallTIRMutator : public ExprMutator {
<< expr->struct_info_;
   }
 
-  Array args;
-  if (call->args[1].as()) {
-args = Downcast(call->args[1])->fields;
-// for call_tir_inplace, don't reinsert in-place args, only the newly 
allocated ones
-if (!is_inplace) {
-  args.insert(args.end(), outs.begin(), outs.end());
-} else {
-  for (size_t i = 0; i < outs.size(); i++) {
-if (inplace_attrs->inplace_indices[i].IntValue() == -1) {
-  args.push_back(outs[i]);
-}
+  Expr callee = call->args[0];
+  Expr arg_tuple = call->args[1];
+  Optional shape_tuple_of_tir_args = NullOpt;
+  if (call->args.size() > 2) {
+shape_tuple_of_tir_args = call->args[2];
+  }
+
+  while (true) {
+auto as_var = arg_tuple.as();
+if (!as_var) break;
+
+auto bound_expr = LookupBinding(as_var.value());
+if (!bound_expr) break;
+
+arg_tuple = bound_expr.value();
+  }
+
+  Array args = [&]() {
+if (auto ptr = arg_tuple.as()) {
+  return ptr->fields;
+} else if (auto ptr = 
arg_tuple->struct_info_.as()) {
+  size_t n_args = ptr->fields.size();
+  Array args;
+  for (size_t i = 0; i < n_args; i++) {
+args.push_back(TupleGetItem(arg_tuple, i));
   }
+  return args;
+} else {
+  LOG(FATAL) << "Lowering of " << call
+ << " requires knowing how many arguments are passed to 
the function.  "
+ << "However, the tuple of arguments " << arg_tuple
+ << " is not itself a tuple, "
+ << "nor does its struct info " << GetStructInfo(arg_tuple)
+ << " define the number of arguments.";
 }
+  }();

Review Comment:
   I tend to use this construction for a couple of reasons.
   
   * Avoid ever having a partially-initialized variable.
   * Limit the scope of temporary variables that should only be used in the 
initialization.
   * Simplify nested if/else cases with early returns.
   
   Effectively, the immediately-invoked lambda expression acts as a block scope 
with a return type, similar to Rust's braces (e.g. The value of `{let i = 5; 
i+1}` is `6`) or relax's SeqExpr.






Re: [PR] [Unity] Ensure one VM register for each relax binding [tvm]

2023-10-25 Thread via GitHub


Lunderberg commented on PR #15855:
URL: https://github.com/apache/tvm/pull/15855#issuecomment-1780074018

   > ah i see, one way to get around is define the callback as a global test 
function and call that with call_packed.
   
   That's what I ended up doing, with a global definition which can be looked 
up at runtime.  I'd been hoping that there would be a way to avoid dependencies 
on global state, out of a general principle.  (And, if relax functions could 
accept a `R.Callable` argument, that would also be very useful for 
`LazyTransformParams`.)
   
   Rebased onto head to re-run CI.  Previous failures were present in unity 
head, and have been resolved.  (See [this 
comment](https://github.com/apache/tvm/pull/15941#issuecomment-1779377445) for 
details.)





[PR] [FFI] Allow IntImm arguments to PackedFunc with int parameter [tvm]

2023-10-25 Thread via GitHub


Lunderberg opened a new pull request, #15983:
URL: https://github.com/apache/tvm/pull/15983

   TVM containers, such as tvm::runtime::Array, require the contained objects 
to inherit from `ObjectRef`.  As a result, the wrapper types `IntImm`, 
`FloatImm`, and `StringImm` are often used to hold native types in TVM 
containers.  Conversions into these wrapper types may be required when using a 
container, and may be performed automatically when passing an object across the 
FFI.  By also providing conversion to the unwrapped type, these automatic 
conversions become transparent to users.
   
   The trait can be specialized to add type-specific conversion logic from 
TVMArgValue and TVMRetValue.





[tvm] branch unity updated: [Unity] Support symbolic PrimValue arguments (#15980)

2023-10-25 Thread masahi
This is an automated email from the ASF dual-hosted git repository.

masahi pushed a commit to branch unity
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/unity by this push:
 new 7ef36ebb5d [Unity] Support symbolic PrimValue arguments (#15980)
7ef36ebb5d is described below

commit 7ef36ebb5d056320676faede712f2052d92f7a5d
Author: Eric Lunderberg 
AuthorDate: Wed Oct 25 15:50:35 2023 -0500

[Unity] Support symbolic PrimValue arguments (#15980)

Prior to this commit, all symbolic variables needed to be defined
either by tensor shapes, or by an explicit `tvm.runtime.ShapeTuple`
argument.  This commit allows arguments `arg: R.Prim(value="n")` to
serve as a source of definition for symbolic variables.
---
 src/relax/backend/vm/codegen_vm.cc |   7 +-
 src/relax/backend/vm/vm_shape_lower.cc | 158 +++--
 src/runtime/ndarray.cc |   4 +-
 src/runtime/relax_vm/builtin.cc|  88 ++
 tests/python/relax/test_vm_build.py| 120 +
 5 files changed, 325 insertions(+), 52 deletions(-)

diff --git a/src/relax/backend/vm/codegen_vm.cc 
b/src/relax/backend/vm/codegen_vm.cc
index caee0a0c13..64b87c6c12 100644
--- a/src/relax/backend/vm/codegen_vm.cc
+++ b/src/relax/backend/vm/codegen_vm.cc
@@ -246,10 +246,11 @@ class CodeGenVM : public 
ExprFunctor {
   Instruction::Arg VisitExpr_(const PrimValueNode* op) final {
 if (auto* int_imm = op->value.as()) {
   return builder_->ConvertConstant(int_imm->value);
-} else {
-  auto* float_imm = op->value.as();
-  ICHECK(float_imm) << "PrimValue can only be IntImm/FloatImm for now";
+} else if (auto* float_imm = op->value.as()) {
   return builder_->ConvertConstant(float_imm->value);
+} else {
+  LOG(FATAL) << "PrimValue should only contain constant after  
VMShapeLower, "
+ << "but received " << GetRef(op) << " with type " << 
op->value->GetTypeKey();
 }
   }
 
diff --git a/src/relax/backend/vm/vm_shape_lower.cc 
b/src/relax/backend/vm/vm_shape_lower.cc
index 8b8eb33f5b..41b27ea625 100644
--- a/src/relax/backend/vm/vm_shape_lower.cc
+++ b/src/relax/backend/vm/vm_shape_lower.cc
@@ -347,6 +347,41 @@ class VMShapeLowerMutator
 return GetRef(op);
   }
 
+  std::pair MakeSymbolicShapeArg(const PrimExpr& expr) {
+using runtime::relax_vm::MakeShapeCode;
+
+if (auto* int_expr = expr.as()) {
+  return {PrimValue::Int64(static_cast(MakeShapeCode::kUseImm)),
+  PrimValue::Int64(int_expr->value)};
+} else {
+  auto it = slot_map_.find(expr);
+  ICHECK(it != slot_map_.end());
+  auto* slot = it->second;
+  ICHECK(slot->value_computed) << "PrimExpr " << expr << " has not been 
computed";
+  return {PrimValue::Int64(static_cast(MakeShapeCode::kLoadShape)),
+  PrimValue::Int64(slot->index)};
+}
+  }
+
+  Expr VisitExpr_(const PrimValueNode* op) final {
+using runtime::relax_vm::MakeShapeCode;
+// Constant shape can be preserved.
+bool is_const_value =
+op->value->IsInstance() || 
op->value->IsInstance();
+if (is_const_value) {
+  return GetRef(op);
+}
+
+Array args = {shape_heap_};
+auto [code, value_or_index] = MakeSymbolicShapeArg(op->value);
+args.push_back(code);
+args.push_back(value_or_index);
+
+// make_shape(heap, n, c[0], r[0], c[1], r[1] ..., c[n], r[n])
+Call call(builtin_make_prim_value_, args, Attrs(), 
{Downcast(op->struct_info_)});
+return call;
+  }
+
   Expr VisitExpr_(const ShapeExprNode* op) final {
 using runtime::relax_vm::MakeShapeCode;
 // Constant shape can be preserved.
@@ -359,17 +394,9 @@ class VMShapeLowerMutator
 
 Array args = {shape_heap_, 
PrimValue::Int64(static_cast(op->values.size()))};
 for (PrimExpr expr : op->values) {
-  if (auto* int_expr = expr.as()) {
-
args.push_back(PrimValue::Int64(static_cast(MakeShapeCode::kUseImm)));
-args.push_back(PrimValue::Int64(int_expr->value));
-  } else {
-auto it = slot_map_.find(expr);
-ICHECK(it != slot_map_.end());
-auto* slot = it->second;
-ICHECK(slot->value_computed) << "PrimExpr " << expr << " has not been 
computed";
-
args.push_back(PrimValue::Int64(static_cast(MakeShapeCode::kLoadShape)));
-args.push_back(PrimValue::Int64(slot->index));
-  }
+  auto [code, value_or_index] = MakeSymbolicShapeArg(expr);
+  args.push_back(code);
+  args.push_back(value_or_index);
 }
 
 // make_shape(heap, n, c[0], r[0], c[1], r[1] ..., c[n], r[n])
@@ -402,6 +429,45 @@ class VMShapeLowerMutator
   // Place this pass as last pass before codegen.
   StructInfo VisitExprDepStructInfoField(const StructInfo& sinfo) final { 
return sinfo; }
 
+  /* \brief Internal utility function used for RunMatch()
+   *
+   * \param expr The expression to be matched
+   *
+   * \param re

Re: [PR] [Unity] Support symbolic PrimValue arguments [tvm]

2023-10-25 Thread via GitHub


masahi merged PR #15980:
URL: https://github.com/apache/tvm/pull/15980





Re: [PR] [Unity] Ensure one VM register for each relax binding [tvm]

2023-10-25 Thread via GitHub


tqchen commented on PR #15855:
URL: https://github.com/apache/tvm/pull/15855#issuecomment-1780006889

   Ah, I see. One way to get around this is to define the callback as a global 
test function and call it with `call_packed`, e.g. `test.vm.assert_notnull`:
   
   
https://github.com/apache/tvm/blob/unity/tests/python/relax/test_vm_codegen_only.py#L140C1-L141C1
   
   https://github.com/apache/tvm/blob/unity/python/tvm/relax/testing/vm.py
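   For reference, a minimal sketch of registering such a global function (the 
name `test.vm.check_positive` is illustrative):
   
   ```python
   import tvm

   # Register the callback under a global name so the generated code can reach
   # it through call_packed instead of capturing Python state.
   @tvm.register_func("test.vm.check_positive")
   def check_positive(x):
       assert x > 0
       return x
   ```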
   
   
   





Re: [PR] [Unity][Transform] Handle relax.Var as call_tir args when lowering [tvm]

2023-10-25 Thread via GitHub


slyubomirsky commented on code in PR #15916:
URL: https://github.com/apache/tvm/pull/15916#discussion_r1372276835


##
python/tvm/relax/op/base.py:
##
@@ -97,7 +97,11 @@ def call_tir(
 ret: Call
 A call node for the call_tir operator.
 """
-if isinstance(args, Expr) and not isinstance(args, RxTuple):  # type: 
ignore
+if (
+isinstance(args, Expr)
+and not isinstance(args, RxTuple)
+and not isinstance(args.struct_info_, TupleStructInfo)

Review Comment:
   You should do this for the other call_tir variants in this file too.






Re: [PR] [Unity][Op] Allow the argument to `call_tir` to be a var bound to a tuple, not a tuple literal [tvm]

2023-10-25 Thread via GitHub


slyubomirsky closed pull request #15971: [Unity][Op] Allow the argument to 
`call_tir` to be a var bound to a tuple, not a tuple literal
URL: https://github.com/apache/tvm/pull/15971





Re: [PR] [Unity][Op] Allow the argument to `call_tir` to be a var bound to a tuple, not a tuple literal [tvm]

2023-10-25 Thread via GitHub


slyubomirsky commented on PR #15971:
URL: https://github.com/apache/tvm/pull/15971#issuecomment-1779980410

   Didn't see it, yep, they're duplicates. There is one case that the other PR 
misses so hopefully that can be updated.





Re: [PR] [Unity][Transform] Handle relax.Var as call_tir args when lowering [tvm]

2023-10-25 Thread via GitHub


slyubomirsky commented on code in PR #15916:
URL: https://github.com/apache/tvm/pull/15916#discussion_r1372273991


##
src/relax/transform/call_tir_rewrite.cc:
##
@@ -111,41 +111,69 @@ class CallTIRMutator : public ExprMutator {
<< expr->struct_info_;
   }
 
-  Array args;
-  if (call->args[1].as()) {
-args = Downcast(call->args[1])->fields;
-// for call_tir_inplace, don't reinsert in-place args, only the newly 
allocated ones
-if (!is_inplace) {
-  args.insert(args.end(), outs.begin(), outs.end());
-} else {
-  for (size_t i = 0; i < outs.size(); i++) {
-if (inplace_attrs->inplace_indices[i].IntValue() == -1) {
-  args.push_back(outs[i]);
-}
+  Expr callee = call->args[0];
+  Expr arg_tuple = call->args[1];
+  Optional shape_tuple_of_tir_args = NullOpt;
+  if (call->args.size() > 2) {
+shape_tuple_of_tir_args = call->args[2];
+  }
+
+  while (true) {
+auto as_var = arg_tuple.as();
+if (!as_var) break;
+
+auto bound_expr = LookupBinding(as_var.value());
+if (!bound_expr) break;
+
+arg_tuple = bound_expr.value();
+  }
+
+  Array args = [&]() {
+if (auto ptr = arg_tuple.as()) {
+  return ptr->fields;
+} else if (auto ptr = 
arg_tuple->struct_info_.as()) {
+  size_t n_args = ptr->fields.size();
+  Array args;
+  for (size_t i = 0; i < n_args; i++) {
+args.push_back(TupleGetItem(arg_tuple, i));
   }
+  return args;
+} else {
+  LOG(FATAL) << "Lowering of " << call
+ << " requires knowing how many arguments are passed to 
the function.  "
+ << "However, the tuple of arguments " << arg_tuple
+ << " is not itself a tuple, "
+ << "nor does its struct info " << GetStructInfo(arg_tuple)
+ << " define the number of arguments.";
 }
+  }();

Review Comment:
   Not that I particularly mind it, but is it preferable to use this 
construction with a lambda as opposed to just assigning `args` in different 
branches?
   ```c++
   Array args;
   if (case1) {
   args = ...
   } else if (case2) {
   args = ...
   }
   // etc.
   ```
   Does it avoid an allocation or something?






Re: [PR] [Unity][Transform] Handle relax.Var as call_tir args when lowering [tvm]

2023-10-25 Thread via GitHub


slyubomirsky commented on PR #15916:
URL: https://github.com/apache/tvm/pull/15916#issuecomment-1779975180

   I think you may need to update the StructInfo inference for 
`call_tir_inplace` like in #15971, since (without modification) that assumes 
the argument is a tuple literal. The test cases here don't try that case, hence 
why that doesn't result in an error (I'd recommend adding a test case).





[tvm] branch dependabot/pip/apps/microtvm/werkzeug-3.0.1 created (now 48d7096e14)

2023-10-25 Thread github-bot
This is an automated email from the ASF dual-hosted git repository.

github-bot pushed a change to branch dependabot/pip/apps/microtvm/werkzeug-3.0.1
in repository https://gitbox.apache.org/repos/asf/tvm.git


  at 48d7096e14 Bump werkzeug from 2.2.3 to 3.0.1 in /apps/microtvm

No new revisions were added by this update.



[PR] Bump werkzeug from 2.2.3 to 3.0.1 in /apps/microtvm [tvm]

2023-10-25 Thread via GitHub


dependabot[bot] opened a new pull request, #15982:
URL: https://github.com/apache/tvm/pull/15982

   Bumps [werkzeug](https://github.com/pallets/werkzeug) from 2.2.3 to 3.0.1.
   
   Release notes
   Sourced from [werkzeug's releases](https://github.com/pallets/werkzeug/releases).
   
   3.0.1
   This is a security release for the 3.0.x feature branch.
   Changes: https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-1
   
   3.0.0
   This is a feature release, which includes new features, removes previously 
deprecated code, and adds new deprecations. The 3.0.x branch is now the 
supported fix branch, the 2.3.x branch will become a tag marking the end of 
support for that branch. We encourage everyone to upgrade, and to use a tool 
such as [pip-tools](https://pypi.org/project/pip-tools/) to pin all 
dependencies and control upgrades. Test with warnings treated as errors to be 
able to adapt to deprecation warnings early.
   Changes: https://werkzeug.palletsprojects.com/en/3.0.x/changes/#version-3-0-0
   Milestone: https://github.com/pallets/werkzeug/milestone/21?closed=1
   
   2.3.7
   This is a fix release for the 2.3.x feature branch.
   Changes: https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-7
   Milestone: https://github.com/pallets/werkzeug/milestone/33?closed=1
   
   2.3.6
   This is a fix release for the 2.3.x feature branch.
   Changes: https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-6
   Milestone: https://github.com/pallets/werkzeug/milestone/32?closed=1
   
   2.3.5
   This is a fix release for the 2.3.x feature branch.
   Changes: https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-5
   Milestone: https://github.com/pallets/werkzeug/milestone/31?closed=1
   
   2.3.4
   This is a fix release for the 2.3.x release branch.
   Changes: https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-4
   Milestone: https://github.com/pallets/werkzeug/milestone/30?closed=1
   
   2.3.3
   This is a fix release for the 2.3.x release branch.
   Changes: https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-3
   Milestone: https://github.com/pallets/werkzeug/milestone/29?closed=1
   
   2.3.2
   This is a fix release for the 2.3.x release branch.
   Changes: https://werkzeug.palletsprojects.com/en/2.3.x/changes/#version-2-3-2
   Milestone: https://github.com/pallets/werkzeug/milestone/28?closed=1
   
   2.3.1
   This is a fix release for the 2.3.x release branch.
   
   ... (truncated)
   
   Changelog
   Sourced from [werkzeug's changelog](https://github.com/pallets/werkzeug/blob/main/CHANGES.rst).
   
   Version 3.0.1
   Released 2023-10-24
   
   - Fix slow multipart parsing for large parts potentially enabling DoS 
attacks. :cwe:`CWE-407`
   
   Version 3.0.0
   Released 2023-09-30
   
   - Remove previously deprecated code. :pr:`2768`
   - Deprecate the `__version__` attribute. Use feature detection, or 
`importlib.metadata.version("werkzeug")`, instead. :issue:`2770`
   - `generate_password_hash` uses scrypt by default. :issue:`2769`
   - Add the `"werkzeug.profiler"` item to the WSGI environ dictionary passed 
to `ProfilerMiddleware`'s `filename_format` function. It contains the elapsed 
and time values for the profiled request. :issue:`2775`
   - Explicitly marked the `PathConverter` as non path isolating. :pr:`2784`
   
   Version 2.3.8
   Unreleased
   
   Version 2.3.7
   Released 2023-08-14
   
   - Use `flit_core` instead of `setuptools` as build backend.
   - Fix parsing of multipart bodies. :issue:`2734` Adjust index of last 
newline in data start. :issue:`2761`
   - Parsing ints from header values strips spacing first. :issue:`2734`
   - Fix empty file streaming when testing. :issue:`2740`
   - Clearer error message when URL rule does not start with slash. :pr:`2750`
   - `Accept` `q` value can be a float without a decimal part. :issue:`2751`
   
   Version 2.3.6
   Released 2023-06-08
   
   - FileStorage.content_length does not fail if the for

[tvm] branch unity updated: [Unity][BYOC] CoreML Scaffolding (#15556)

2023-10-25 Thread wuwei
This is an automated email from the ASF dual-hosted git repository.

wuwei pushed a commit to branch unity
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/unity by this push:
 new 5808cea9af [Unity][BYOC] CoreML Scaffolding (#15556)
5808cea9af is described below

commit 5808cea9af4a6243f20fef8cd1e342e839a7f307
Author: Sunghyun Park 
AuthorDate: Wed Oct 25 10:27:22 2023 -0700

[Unity][BYOC] CoreML Scaffolding (#15556)

* scaffolding

* add comments

* wip

* refactor merge_composite to append the codegen name. This is helpful to 
analyze the profiling results

* create tmp folder when it does not exist

* remove debug codes

* lint

* lint

* fix

* fix ci

* lint

* hide coreml dependency for ci

* lint

* lint

* ci bugfix

* fix
---
 python/tvm/contrib/coreml_runtime.py |   1 +
 python/tvm/relax/backend/contrib/coreml.py   | 490 +++
 src/runtime/contrib/coreml/coreml_runtime.mm |   2 +-
 tests/python/relax/test_codegen_coreml.py| 294 
 4 files changed, 786 insertions(+), 1 deletion(-)

diff --git a/python/tvm/contrib/coreml_runtime.py 
b/python/tvm/contrib/coreml_runtime.py
index b272ed..aa4f212799 100644
--- a/python/tvm/contrib/coreml_runtime.py
+++ b/python/tvm/contrib/coreml_runtime.py
@@ -42,6 +42,7 @@ def create(symbol, compiled_model_path, device):
 fcreate = device._rpc_sess.get_function(runtime_func)
 else:
 fcreate = tvm._ffi.get_global_func(runtime_func)
+assert fcreate, "Cannot find `tvm.coreml_runtime.create` function."
 
 return CoreMLModule(fcreate(symbol, compiled_model_path))
 
diff --git a/python/tvm/relax/backend/contrib/coreml.py 
b/python/tvm/relax/backend/contrib/coreml.py
new file mode 100644
index 00..b5caa688f2
--- /dev/null
+++ b/python/tvm/relax/backend/contrib/coreml.py
@@ -0,0 +1,490 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=invalid-name, unused-argument, import-outside-toplevel
+"""Pattern table and codegen for CoreML"""
+
+import os
+import shutil
+import tvm._ffi
+from tvm.contrib import coreml_runtime
+from tvm.contrib.xcode import compile_coreml
+
+import tvm
+from tvm.relax import transform
+from tvm.relax.struct_info import TensorStructInfo, PrimStructInfo
+from tvm.relax.expr import (
+BindingBlock,
+Call,
+Function,
+PrimValue,
+SeqExpr,
+Var,
+VarBinding,
+Constant,
+)
+from tvm.relax.dpl.pattern import is_op, wildcard
+from tvm.relax.transform import PatternCheckContext
+from ..pattern_registry import get_patterns_with_prefix, register_patterns
+from ..patterns import make_matmul_pattern
+from ...expr_functor import PyExprVisitor, visitor
+
+
+def _check_default(context: PatternCheckContext) -> bool:
+return True
+
+
+def default_binary_patterns(op_name: str):
+"""
+Returns a list of binary op patterns in coreML BYOC backend.
+"""
+
+def _make_binary_pattern():
+lhs = wildcard()
+rhs = wildcard()
+out = is_op("relax." + op_name)(lhs, rhs)
+annotations = {"lhs": lhs, "rhs": rhs, "root": out}
+return out, annotations
+
+def _binary_pattern(pattern_name):
+return (pattern_name, *_make_binary_pattern(), _check_default)
+
+return [_binary_pattern("coreml." + op_name)]
+
+
+def default_unary_patterns(op_name: str):
+"""
+Returns a list of unary op patterns in coreML BYOC backend.
+"""
+
+def _make_unary_pattern():
+lhs = wildcard()
+out = is_op("relax." + op_name)(lhs)
+annotations = {"lhs": lhs, "root": out}
+return out, annotations
+
+def _unary_pattern(pattern_name):
+return (pattern_name, *_make_unary_pattern(), _check_default)
+
+return [_unary_pattern("coreml." + op_name)]
+
+
+def conv2d_patterns():
+"""
+Returns a list of conv2d patterns in coreML BYOC backend.
+"""
+
+def _make_conv2d_pattern():
+lhs = wildcard()
+rhs = wildcard()
+out = is_op("relax.nn.conv2d")(lhs, rhs)
+annotations = {"lhs"

Re: [PR] [Unity][BYOC] CoreML Scaffolding [tvm]

2023-10-25 Thread via GitHub


vinx13 merged PR #15556:
URL: https://github.com/apache/tvm/pull/15556





Re: [PR] [BugFix][TIR] fix error in symbolic floormod [tvm]

2023-10-25 Thread via GitHub


JackWeiw commented on code in PR #15961:
URL: https://github.com/apache/tvm/pull/15961#discussion_r1372017992


##
src/tir/transforms/inject_ptx_async_copy.cc:
##
@@ -113,9 +116,11 @@ class PTXAsyncCopyInjector : public StmtMutator {
 return PrimExpr();
   }();
   if (src_offset.defined() && dst_offset.defined()) {

Review Comment:
   [Here](https://gist.github.com/JackWeiw/5b80956ab44c0f63d4f434f18f42cc89) is 
the script to reproduce the dtype mismatch in the InjectPTXAsyncCopy pass.
   Sorry I missed this suggestion earlier. It may be a little hard to add a 
unit test for this circumstance, and I can't draw a conclusion about it yet. 
Could you give some advice on how to add a unit test for this case?






[PR] [Unity][UnitTest] Cleanup test_vm_build.py [tvm]

2023-10-25 Thread via GitHub


Lunderberg opened a new pull request, #15981:
URL: https://github.com/apache/tvm/pull/15981

   - Removed unused `import os`
   
   - Used `tvm.testing.main()` inside `if __name__=="__main__"`
   
   - Added parametrized fixture `exec_mode` instead of marking all tests.
   
   - Replace `@pytest.mark.xfail` with `with pytest.raises`, based on the 
intended test behavior in comments.  `@pytest.mark.xfail` allows a test to 
fail, and is intended for marking known bugs or not-yet-implemented 
functionality.  `with pytest.raises` requires that the block throw an 
exception, and is intended for validating that error-checking paths function 
correctly; see the sketch below.
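   For reference, an illustrative sketch of the distinction (not from the PR; 
the helper is made up):
   
   ```python
   import pytest


   def validate(x):
       """Illustrative helper: rejects negative input."""
       if x < 0:
           raise ValueError("negative input")
       return x


   @pytest.mark.xfail  # tolerates failure: for known bugs / unimplemented features
   def test_known_bug():
       assert validate(1) == 2  # deliberately wrong; the failure is allowed


   def test_error_checking_path():
       # requires the exception: for validating error-checking paths
       with pytest.raises(ValueError):
           validate(-1)
   ```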





[PR] [Unity] Support symbolic PrimValue arguments [tvm]

2023-10-25 Thread via GitHub


Lunderberg opened a new pull request, #15980:
URL: https://github.com/apache/tvm/pull/15980

   Prior to this commit, all symbolic variables needed to be defined either 
by tensor shapes, or by an explicit `tvm.runtime.ShapeTuple` argument.  This 
commit allows arguments `arg: R.Prim(value="n")` to serve as a source of 
definition for symbolic variables.
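   For illustration, a hedged sketch of the new annotation (the function and 
shapes are made up; the `R.Prim(value="n")` spelling follows the commit 
message, and exact usage may differ):
   
   ```python
   from tvm.script import relax as R


   @R.function
   def main(x: R.Tensor(("n",), "float32"), n: R.Prim(value="n")) -> R.Tensor(("n",), "float32"):
       # `n` can now be defined by the scalar R.Prim argument, not only by a
       # tensor shape or an explicit ShapeTuple.
       return x
   ```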





Re: [PR] [RFC] Scalable vectors in TIR [tvm-rfcs]

2023-10-25 Thread via GitHub


lhutton1 commented on PR #104:
URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1779468180

   Regarding the changes required to support scalability in the data type, I've 
been prototyping adding a new `scalable_` attribute to `DataType` that wraps 
`DLDataType`.
   
   However, I've run into what I believe is an issue when accessing data types 
at compile time across the FFI boundary between Python and C++. `TVMArgValue` 
and `TVMRetValue` may have a value stored as a `DLDataType`. Storing a scalable 
`DataType` as a `DLDataType` means we lose the information about scalability 
(assuming we don't want to alter DLPack, or use the negative-lanes (< -1) 
approach). For the limited number of test cases I've written, I've worked 
around this limitation by forcing `DataType` to be stored as a string across 
the boundary, but this feels a bit wrong.
   
   I wonder if there could be something I've missed here or if there are any 
other suggestions? Are there any rules for using `string`, `DataType` and 
`DLDataType` interchangeably?





Re: [PR] [BugFix][TIR] fix error in symbolic floormod [tvm]

2023-10-25 Thread via GitHub


Lunderberg commented on PR #15961:
URL: https://github.com/apache/tvm/pull/15961#issuecomment-1779454823

   It looks like this PR isn't unity-specific.  Can the PR be applied to the 
`main` branch instead, so we get the bugfix on both branches?





Re: [PR] [Unity][Op] Allow the argument to `call_tir` to be a var bound to a tuple, not a tuple literal [tvm]

2023-10-25 Thread via GitHub


quic-sanirudh commented on PR #15971:
URL: https://github.com/apache/tvm/pull/15971#issuecomment-1779451578

   > This may be a duplicate of #15916, which also resolves the analogous 
problem for `FuseOps`, `RewriteDataflowReshape`, and `FoldConstant`.
   
   Oh nice, this is great, thanks for this. I was running into issues in some 
of the same passes when testing locally, and I had started working on a 
follow-up fix to this PR, but it looks like they're all fixed in #15916. That 
saves me a bunch of time 😄, so thanks again.





Re: [PR] Add missing backtick [tvm]

2023-10-25 Thread via GitHub


Lunderberg commented on PR #15968:
URL: https://github.com/apache/tvm/pull/15968#issuecomment-1779442186

   Looks like the CI failures are due to a check that the PR body is non-empty. 
 Can you add a description to the PR?





Re: [PR] [Hotfix] Mark python-FFI handling with TVM_DLL [tvm]

2023-10-25 Thread via GitHub


Lunderberg merged PR #15970:
URL: https://github.com/apache/tvm/pull/15970





[tvm] branch main updated: [Hotfix] Mark python-FFI handling with TVM_DLL (#15970)

2023-10-25 Thread lunderberg
This is an automated email from the ASF dual-hosted git repository.

lunderberg pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new de56d8c950 [Hotfix] Mark python-FFI handling with TVM_DLL (#15970)
de56d8c950 is described below

commit de56d8c9508417d85d413b7f3f7b3c9bd04f
Author: Eric Lunderberg 
AuthorDate: Wed Oct 25 09:26:45 2023 -0500

[Hotfix] Mark python-FFI handling with TVM_DLL (#15970)

* [Hotfix] Mark python-FFI handling with TVM_DLL

Bugfix for builds on Windows.

* Updated declarations to match other usage in c_runtime_api.h
---
 include/tvm/runtime/c_runtime_api.h | 48 +
 src/runtime/c_runtime_api.cc| 14 +--
 2 files changed, 55 insertions(+), 7 deletions(-)

diff --git a/include/tvm/runtime/c_runtime_api.h 
b/include/tvm/runtime/c_runtime_api.h
index d678003ee8..b7474cbbae 100644
--- a/include/tvm/runtime/c_runtime_api.h
+++ b/include/tvm/runtime/c_runtime_api.h
@@ -251,6 +251,18 @@ TVM_DLL void TVMAPISetLastError(const char* msg);
  */
 TVM_DLL void TVMAPISetLastPythonError(void* py_object);
 
+/*! \brief Return the previous python error, if any.
+ *
+ * Used to propagate the original Python exception to a python
+ * try/except, when there are C++ stack frames between the location thro
+ *
+ * \return The previous argument passed during the most recent call to
+ * TVMAPISetLastPythonError.  If TVMAPISetLastPythonError has not
+ * been called, or if TVMDropLastPythonError has been called since
+ * the most recent to TVMAPISetLastPythonError, returns nullptr.
+ */
+TVM_DLL void* TVMGetLastPythonError();
+
 /*!
  * \brief return str message of the last error
  *  all function in this file will return 0 when success
@@ -261,6 +273,42 @@ TVM_DLL void TVMAPISetLastPythonError(void* py_object);
  *  \return error info
  */
 TVM_DLL const char* TVMGetLastError(void);
+
+/*!
+ * \brief Return the backtrace of the most recent error
+ *
+ * Returns the backtrace of the most recent error, if an error exists,
+ * and the error contains a backtrace.  If no error exists or the
+ * error does not contain a backtrace, returns nullptr.
+ *
+ *  \return The backtrace of the most recent error
+ */
+TVM_DLL const char* TVMGetLastBacktrace();
+
+/*!
+ * \brief Remove the propagated python error, if any
+ *
+ * Removes the TVM-held reference to a thrown python exception object.
+ * Because these objects contain references to the stack frames from
+ * which the exception was thrown, maintaining a reference to an
+ * exception object prevents any local python variables from being
+ * garbage-collected.  After retrieving the object using
+ * TVMGetLastPythonError, the Python FFI interface uses this method to
+ * clear the TVM-held reference to the exception, to allow garbage
+ * collection to continue.
+ */
+TVM_DLL void TVMDropLastPythonError();
+
+/*! \brief Re-throw the most recent error.
+ *
+ * If an error was previously set using TVMAPISetLastError or
+ * TVMAPISetLastPythonError, re-throw the error.  This is similar to
+ * `LOG(FATAL) << TVMGetLastError()`, but includes handling to
+ * propagate a python exception across C++ stack frames, or to append
+ * a stack trace to an error message.
+ */
+TVM_DLL void TVMThrowLastError();
+
 /*!
  * \brief Load module from file.
  * \param file_name The file name to load the module from.
diff --git a/src/runtime/c_runtime_api.cc b/src/runtime/c_runtime_api.cc
index 9f2ea8e2ff..0881eaf704 100644
--- a/src/runtime/c_runtime_api.cc
+++ b/src/runtime/c_runtime_api.cc
@@ -418,7 +418,7 @@ const char* TVMGetLastError() {
   }
 }
 
-extern "C" void* TVMGetLastPythonError() {
+void* TVMGetLastPythonError() {
   auto& last_error = TVMAPIRuntimeStore::Get()->last_error;
   if (auto* wrapped = std::get_if(&last_error)) {
 return wrapped->obj.raw_pointer();
@@ -427,7 +427,7 @@ extern "C" void* TVMGetLastPythonError() {
   }
 }
 
-extern "C" const char* TVMGetLastBacktrace() {
+const char* TVMGetLastBacktrace() {
   const auto& last_error = TVMAPIRuntimeStore::Get()->last_error;
   if (const auto* wrapped = std::get_if(&last_error)) {
 return wrapped->cpp_backtrace.data();
@@ -438,7 +438,7 @@ extern "C" const char* TVMGetLastBacktrace() {
   }
 }
 
-extern "C" void TVMDropLastPythonError() {
+void TVMDropLastPythonError() {
   auto& last_error = TVMAPIRuntimeStore::Get()->last_error;
   if (std::get_if(&last_error)) {
 last_error = "";
@@ -458,12 +458,12 @@ int TVMAPIHandleException(const std::exception& e) {
   return -1;
 }
 
-extern "C" void TVMAPISetLastPythonError(void* obj) {
+void TVMAPISetLastPythonError(void* obj) {
   auto& last_error = TVMAPIRuntimeStore::Get()->last_error;
   last_error = WrappedPythonError(WrappedPythonObject(obj));
 }
 
-void ThrowLastError() {
+void TVMThrowLastError() {
   auto& last_error = TVMAPIRuntimeStore::Get()->last_error;
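
For context, a minimal ctypes sketch of the get/drop protocol that the
declarations above support (illustrative only; the real handling lives in
tvm/_ffi, and the library handle and restype setup are assumptions):

```python
import ctypes

def pending_error_info(lib: ctypes.CDLL):
    """Sketch: query a propagated Python error and release TVM's reference."""
    lib.TVMGetLastPythonError.restype = ctypes.c_void_p
    lib.TVMGetLastBacktrace.restype = ctypes.c_char_p

    handle = lib.TVMGetLastPythonError()
    if handle:
        # The real FFI recovers the original exception object from `handle`
        # before re-raising it, then drops TVM's reference so the exception's
        # stack frames can be garbage-collected.
        lib.TVMDropLastPythonError()
        return "python-exception", None
    backtrace = lib.TVMGetLastBacktrace()
    return "cpp-error", backtrace.decode() if backtrace else None
```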

Re: [PR] [Unity][Op] Allow the argument to `call_tir` to be a var bound to a tuple, not a tuple literal [tvm]

2023-10-25 Thread via GitHub


Lunderberg commented on PR #15971:
URL: https://github.com/apache/tvm/pull/15971#issuecomment-1779402991

   This may be a duplicate of https://github.com/apache/tvm/pull/15916, which 
also resolves the analogous problem for `FuseOps`, `RewriteDataflowReshape`, 
and `FoldConstant`.





Re: [PR] [Unity][BYOC] Fix cuBLAS BYOC compatibilty with Disco + `ThreadedSession` [tvm]

2023-10-25 Thread via GitHub


Lunderberg commented on PR #15977:
URL: https://github.com/apache/tvm/pull/15977#issuecomment-1779395939

   Current CI failures were present in unity head, but should be resolved after 
PR#15941.  (See [this 
comment](https://github.com/apache/tvm/pull/15941#issuecomment-1779377445) for 
details.)





Re: [PR] [Unity][BYOC] Fix cuBLAS BYOC compatibilty with Disco + `ThreadedSession` [tvm]

2023-10-25 Thread via GitHub


github-actions[bot] commented on PR #15977:
URL: https://github.com/apache/tvm/pull/15977#issuecomment-1779395462

   Failed to re-run CI in https://github.com/apache/tvm/actions/runs/6641806894
   
   
   
   ```
   Traceback (most recent call last):
 File "ci/scripts/github/github_tvmbot.py", line 594, in comment_failure
   raise item
 File "ci/scripts/github/github_tvmbot.py", line 700, in run
   pr.rerun_jenkins_ci()
 File "ci/scripts/github/github_tvmbot.py", line 553, in rerun_jenkins_ci
   post(url, auth=("tvm-bot", TVM_BOT_JENKINS_TOKEN))
 File "/home/runner/work/tvm/tvm/ci/scripts/jenkins/git_utils.py", line 53, 
in post
   with request.urlopen(req, data) as response:
 File "/usr/lib/python3.8/urllib/request.py", line 222, in urlopen
   return opener.open(url, data, timeout)
 File "/usr/lib/python3.8/urllib/request.py", line 531, in open
   response = meth(req, response)
 File "/usr/lib/python3.8/urllib/request.py", line 640, in http_response
   response = self.parent.error(
 File "/usr/lib/python3.8/urllib/request.py", line 569, in error
   return self._call_chain(*args)
 File "/usr/lib/python3.8/urllib/request.py", line 502, in _call_chain
   result = func(*args)
 File "/usr/lib/python3.8/urllib/request.py", line 649, in 
http_error_default
   raise HTTPError(req.full_url, code, msg, hdrs, fp)
   urllib.error.HTTPError: HTTP Error 500: Server Error
   
   ```
   
   with response
   
   ```
   
 
 
   
   
   
   [Jenkins HTML error page (markup mangled by the archive). Recoverable 
   content: "Oops! A problem occurred while processing the request." 
   Logging ID=12d4dfbc-7b4b-4b5b-a66f-c86490e2ec3b. Jenkins 2.361.2]
   ```
   
   





Re: [PR] [Unity][BYOC] Fix cuBLAS BYOC compatibilty with Disco + `ThreadedSession` [tvm]

2023-10-25 Thread via GitHub


Lunderberg commented on PR #15977:
URL: https://github.com/apache/tvm/pull/15977#issuecomment-1779394478

   @tvm-bot re-run





Re: [PR] [Unity][BYOC] Fix cuBLAS BYOC compatibilty with Disco + `ThreadedSession` [tvm]

2023-10-25 Thread via GitHub


Lunderberg commented on PR #15977:
URL: https://github.com/apache/tvm/pull/15977#issuecomment-1779387049

   @ci-bot re-run





Re: [PR] [Unity][UnitTest] Enable BindParams test for R.Prim [tvm]

2023-10-25 Thread via GitHub


Lunderberg commented on PR #15978:
URL: https://github.com/apache/tvm/pull/15978#issuecomment-1779386185

   Rebased onto head to re-run CI.  Previous failures were present in unity 
head, and have been resolved.  (See [this 
comment](https://github.com/apache/tvm/pull/15941#issuecomment-1779377445) for 
details.)





Re: [PR] [Unity] Include LegalizeOps in the default relax.build lowering flow [tvm]

2023-10-25 Thread via GitHub


Lunderberg commented on PR #15864:
URL: https://github.com/apache/tvm/pull/15864#issuecomment-1779381464

   Sounds good, and thank you!
   
   I'm rebasing the commit on top of unity head, as the current CI failures are 
due to a breakage on `unity` head, and are resolved with 
https://github.com/apache/tvm/pull/15941. (See [this 
comment](https://github.com/apache/tvm/pull/15941#issuecomment-1779377445) for 
details.)





[tvm] branch unity updated: [Unity][Transform] Improved canonicalization of non-dataflow Var (#15941)

2023-10-25 Thread lunderberg
This is an automated email from the ASF dual-hosted git repository.

lunderberg pushed a commit to branch unity
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/unity by this push:
 new 4d19c8ab1f [Unity][Transform] Improved canonicalization of 
non-dataflow Var (#15941)
4d19c8ab1f is described below

commit 4d19c8ab1f2dd03b1cd6f7eff10eba020867e4b4
Author: Eric Lunderberg 
AuthorDate: Wed Oct 25 09:13:33 2023 -0500

[Unity][Transform] Improved canonicalization of non-dataflow Var (#15941)

* [Unity][Transform] Improved canonicalization of non-dataflow Var

Prior to this commit, `relax.transform.CanonicalizeBindings` removed
trivial bindings `var_y = var_x` where a `var_y: relax.DataflowVar`
and `var_x: relax.Var`, but did not remove trivial bindings when
`var_y: relax.Var` and `var_x: relax.DataflowVar`.  This was to avoid
invalid use of a `relax.DataflowVar` outside of a dataflow block.

This commit updates `CanonicalizeBindings` to handle this type of
binding as well.  To ensure that no `relax.DataflowVar` instances are
used outside of a dataflow block, this is done by replacing `var_y:
relax.DataflowVar` at its point of definition, instead of replacing
`var_x: relax.Var` at its point of use.

This commit also canonicalizes `relax.Var` definitions to
`relax.DataflowVar`, if the binding occurs within a dataflow block,
and the variable is never used outside of a dataflow block.

* Simplify unwrapping of known bindings

* Updated to use Map, to avoid while(true) loops
---
 src/relax/transform/canonicalize_bindings.cc   | 247 +++--
 tests/python/relax/test_dataflow_pattern.py|   3 +-
 .../python/relax/test_optimize_layout_transform.py |   9 +-
 .../python/relax/test_remove_redundant_reshape.py  |   3 +-
 .../relax/test_transform_canonicalize_bindings.py  | 244 +++-
 5 files changed, 414 insertions(+), 92 deletions(-)

diff --git a/src/relax/transform/canonicalize_bindings.cc 
b/src/relax/transform/canonicalize_bindings.cc
index 2e7f4311f9..246b38f6f8 100644
--- a/src/relax/transform/canonicalize_bindings.cc
+++ b/src/relax/transform/canonicalize_bindings.cc
@@ -33,68 +33,187 @@
 namespace tvm {
 namespace relax {
 
-class BindingCanonicalizer : public ExprMutator {
+namespace {
+
+struct CanonicalizationPlan {
+  Map replace_usage;
+  Map replace_binding;
+  std::unordered_set bindings_to_remove;
+};
+
+/*! \brief Utility class to identify usage location
+ *
+ * Canonicalization of a variable binding may require information from
+ * later in the function.  For example, replacing `dataflow_x = expr`
+ * with `var_x = expr` to avoid a trivial binding of `var_x =
+ * dataflow_x` later in the function.  This utility examines a relax
+ * expression, and plans the changes to be made in a mutation pass.
+ */
+class CanonicalizePlanner : public ExprVisitor {
  public:
-  BindingCanonicalizer() {}
-
-  using ExprMutator::VisitExpr_;
-
-  Expr VisitExpr_(const TupleGetItemNode* tuple_get_item) override {
-if (auto tuple_var = tuple_get_item->tuple.as()) {
-  if (auto tuple_value = LookupBinding(tuple_var.value())) {
-if (auto explicit_tuple = tuple_value.as()) {
-  CHECK_GE(tuple_get_item->index, 0)
-  << "Tuple " << tuple_value << " is accessed at index " << 
tuple_get_item->index
-  << ", but negative indices are not supported in this context.";
-  CHECK_LT(tuple_get_item->index, explicit_tuple->fields.size())
-  << "Tuple " << tuple_value << " is accessed at index " << 
tuple_get_item->index
-  << ", but the tuple size is only " << 
explicit_tuple->fields.size();
-  return VisitExpr(explicit_tuple->fields[tuple_get_item->index]);
+  static CanonicalizationPlan Collect(const Expr& expr) {
+CanonicalizePlanner visitor;
+visitor.VisitExpr(expr);
+
+CanonicalizationPlan plan;
+
+std::unordered_set handled;
+
+for (const auto& binding_iter : visitor.trivial_bindings_) {
+  Var bound_var = binding_iter.first;
+  Var bound_to = binding_iter.second;
+
+  while (auto opt = visitor.trivial_bindings_.Get(bound_to)) {
+// This may be a trivial binding into a trivial binding.  In
+// that case, unwrap the bindings until we find the earliest
+// non-trivial binding.
+bound_to = opt.value();
+  }
+
+  while (auto opt = plan.replace_binding.Get(bound_to->vid)) {
+// The variable we are binding to may have already been
+// replaced, if it fell into Case 4 (Var = DataflowVar).  In
+// that case, we check against its replacement instead.
+bound_to = opt.value();
+  }
+
+  if (bound_var.as() || !bound_to.as()) {
+// Case 1: Var = Var
+// Case 2: DataflowVar = Var
+// Case 3: DataflowVar = DataflowVar
+//
+ 
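
As an aside, a small self-contained sketch of the kind of trivial binding this
pass now folds away (standard Relax TVMScript; not taken from the PR's tests,
so treat the names and shapes as placeholders):

```python
import tvm
from tvm import relax
from tvm.script import ir as I, relax as R

@I.ir_module
class Before:
    @R.function
    def main(x: R.Tensor((4,), "float32")) -> R.Tensor((4,), "float32"):
        with R.dataflow():
            y = R.add(x, x)  # `y` is a DataflowVar
            z = y            # trivial binding: non-dataflow Var bound to a DataflowVar
            R.output(z)
        return z

After = relax.transform.CanonicalizeBindings()(Before)
After.show()  # the `z = y` binding is gone; the add is bound to the output var directly
```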

Re: [PR] [Unity][Transform] Improved canonicalization of non-dataflow Var [tvm]

2023-10-25 Thread via GitHub


Lunderberg merged PR #15941:
URL: https://github.com/apache/tvm/pull/15941





Re: [PR] [Unity][Transform] Improved canonicalization of non-dataflow Var [tvm]

2023-10-25 Thread via GitHub


Lunderberg commented on PR #15941:
URL: https://github.com/apache/tvm/pull/15941#issuecomment-1779377445

   No problem!
   
   I'm going to merge this in, as it resolves a few test failures in unity head 
that resulted from a conflict between 
[PR#15791](https://github.com/apache/tvm/pull/15791) and 
[PR#15904](https://github.com/apache/tvm/pull/15904).  Each PR passed CI on its 
own, but the combination of the two changes caused a few minor test failures.  
Since the test failures were resolved in this PR, that avoids needing to make a 
hotfix PR.





Re: [PR] [Unity] Ensure one VM register for each relax binding [tvm]

2023-10-25 Thread via GitHub


Lunderberg commented on PR #15855:
URL: https://github.com/apache/tvm/pull/15855#issuecomment-1779338988

   One unrelated question, though.  In your pseudocode, you have the signature 
`def test(x: Object, callback)`, but I wasn't able to pass a callback directly 
into a relax function.  I could define a relax function that takes `callback: 
R.Callable([R.Object], R.Prim('bool'))` as an argument, but it ran into [this 
check](https://github.com/apache/tvm/blob/unity/src/relax/backend/vm/exec_builder.cc#L141)
 when running CodegenVM.  I ended up rewriting it to instead use `R.ExternFunc` 
to find the callback, but that requires global state for a local callback.  Is 
there a way to pass the callback directly to a relax function?
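   
   For anyone reading along, the workaround mentioned above looks roughly like 
this (the callback name is made up for illustration; only the registration is 
shown as runnable code, and the Relax-side call is sketched in a comment):
   
   ```python
   import tvm
   
   # Expose the callback through TVM's global registry so the compiled VM can
   # look it up by name instead of receiving it as an argument.
   @tvm.register_func("my_local_callback", override=True)
   def _callback(x) -> bool:
       return True
   
   # Inside the Relax function, the callback is then reached with something like
   #   R.call_packed("my_local_callback", x, sinfo_args=R.Prim("bool"))
   # which is exactly the "global state for a local callback" mentioned above.
   ```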





Re: [PR] [Unity] Ensure one VM register for each relax binding [tvm]

2023-10-25 Thread via GitHub


Lunderberg commented on PR #15855:
URL: https://github.com/apache/tvm/pull/15855#issuecomment-1779329489

   Thank you, @tqchen, and I really like that unit test to validate the desired 
behavior in the VM, and not just how that behavior impacts a use case.  I've 
updated the PR to include the unit test, parametrized to run on both 
`exec_mode='bytecode'` and `exec_mode='compiled'`, and made the corresponding 
updates for the compiled mode.





Re: [I] [Unity] [Tracking Issue] Heterogeneous execution for Relax [tvm]

2023-10-25 Thread via GitHub


liquanfeng commented on issue #15101:
URL: https://github.com/apache/tvm/issues/15101#issuecomment-1779026435

   Very useful work! I tried it on 
tests/python/relax/test_codegen_cudnn.py::test_conv2d_offload, following 
tests/python/relax/test_vm_multi_device.py::test_multi_device, as shown below:
   ```python
   import numpy as np
   
   import tvm
   import tvm.testing
   import tvm.topi.testing
   from tvm import relax
   from tvm.relax.backend.contrib.cudnn import partition_for_cudnn
   from tvm.script import relax as R, ir as I
   
   from tvm.script.ir_builder import IRBuilder
   from tvm.script.ir_builder import relax as relax_builder
   
   data_shape, weight_shape, dtype = (
   (16, 32, 32, 16),
   (32, 3, 3, 16),
   "float32",
   )
   
   input_np = np.random.randn(*data_shape).astype(dtype)
   weight_np = np.random.randn(*weight_shape).astype(dtype)
   
   oc = weight_shape[0]
   bias_np = np.random.randn(1, 1, 1, oc).astype(dtype)
   args = (input_np, weight_np, bias_np)
   
   with IRBuilder() as builder:
   with relax_builder.function():
   R.func_name("main")
   data = R.arg("data", R.Tensor(data_shape, dtype))
   weight = R.arg("weight", R.Tensor(weight_shape, dtype))
   bias = R.arg("bias", R.Tensor((1, 1, 1, weight_shape[0]), dtype))
   
   with R.dataflow() as frame:
   output = R.emit(
   R.nn.conv2d(
   data,
   weight,
   out_dtype=dtype,
   padding=(1, 1),
   data_layout="NHWC",
   kernel_layout="OHWI",
   )
   )
   output = R.emit(output + bias)
   
   output = R.emit(relax.op.to_vdevice(output, I.vdevice("llvm")))
   output = R.emit(R.multiply(output, R.const(2, "float32")))
   R.output(output)
   
   R.func_ret_value(frame.output_vars[0])
   
   func = builder.get()
   mod = tvm.IRModule(
   {"main": func},
   global_infos={
   "vdevice": [
   I.vdevice("cuda", 0),
   I.vdevice("llvm"),
   ]
   },
   )
   
   mod = partition_for_cudnn(mod)
   mod = relax.transform.RunCodegen()(mod)
   
   devs = [tvm.device("cuda", 0), tvm.device("llvm")]
   mod = relax.transform.RealizeVDevice()(mod)
   mod = relax.transform.LegalizeOps()(mod)
   mod = tvm.tir.transform.DefaultGPUSchedule()(mod)
   
   with tvm.transform.PassContext(config={"relax.backend.use_cuda_graph": 
False}):
   ex = relax.build(mod)
   vm = relax.VirtualMachine(ex, devs)
   f = vm["main"]
   inputs = [tvm.nd.array(inp, tvm.device("cuda", 0)) for inp in input_np]
   
   print(f(*inputs).numpy())
   ```
   This raises the following error:
   ```
   Traceback (most recent call last):
 File "/workspace/yongwww/tvm/tests/byoc.py", line 77, in 
   ex = relax.build(mod)
 File "/workspace/yongwww/tvm/python/tvm/relax/vm_build.py", line 334, in 
build
   new_mod = lowering_passes(mod)
 File "/workspace/yongwww/tvm/python/tvm/ir/transform.py", line 238, in 
__call__
   return _ffi_transform_api.RunPass(self, mod)
 File "/workspace/yongwww/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 
239, in __call__
   raise_last_ffi_error()
 File "/workspace/yongwww/tvm/python/tvm/_ffi/base.py", line 476, in 
raise_last_ffi_error
   raise py_err
   tvm._ffi.base.TVMError: Traceback (most recent call last):
 24: 
tvm::runtime::PackedFuncObj::Extractor::AssignTypedLambda(tvm::transform::{lambda(tvm::transform::Pass, 
tvm::IRModule)#7}, std::__cxx11::basic_string, 
std::allocator >)::{lambda(tvm::runtime::TVMArgs const&, 
tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, 
std::__cxx11::basic_string, std::allocator 
>, tvm::runtime::TVMRetValue)
 23: tvm::transform::Pass::operator()(tvm::IRModule) const
 22: tvm::transform::Pass::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
 21: tvm::transform::SequentialNode::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
 20: tvm::transform::Pass::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
 19: tvm::transform::ModulePassNode::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
 18: _ZN3tvm7runtime13PackedFuncObj
 17: tvm::runtime::TypedPackedFunc::AssignTypedLambda(tvm::relax::transform::CallTIRRewrite()::{lambda(tvm::IRModule,
 tvm::transform::PassContext)#1})::{lambda(tvm::runtime::TVMArgs const&, 
tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const, 
tvm::runtime::TVMRetValue) const
 16: tvm::relax::CallTIRMutator::Run()
 15: tvm::relax::ExprMutator::VisitExpr(tvm::RelayExpr const&)
 14: 
_ZZN3tvm5relax11ExprFunctorIFNS_9RelayExprERKS2_EE10InitVTableEvENUlRKNS_7r
 13: tvm::relax::ExprMutator::VisitExpr_(tvm::relax::FunctionNode const*)
 12: tvm::relax::E

[PR] [Unity] Add fast_pow in FastMathTransform pass [tvm]

2023-10-25 Thread via GitHub


HongHongHongL opened a new pull request, #15979:
URL: https://github.com/apache/tvm/pull/15979

   This PR uses fast_exp to convert the power op into a fast but approximate 
counterpart.
   
   When y is not an integer, fast_exp(log(x) * y) is much faster than 
topi.power(x, y). However, if y is not a constant, fast_exp(log(x) * y) gives a 
wrong answer when x < 0 and y is an integer. To fix this, the PR rewrites 
power(x, y) as exp(log(abs(x)) * y) * (log((x / abs(x) - 1) * (ceil(y) - 
floor(y)) + 1) + (x / abs(x) - 1) * (y % 2) + 1); the trailing factor evaluates 
to +1 or -1 when y is an integer (restoring the sign for a negative base) and 
to NaN when x < 0 and y is fractional.
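   
   A quick NumPy check of the rewrite (illustrative only; this is not the 
implementation added by the pass):
   
   ```python
   import numpy as np
   
   def fast_pow_reference(x, y):
       # Mirrors the rewrite above: exp(log|x| * y) times a sign/NaN correction.
       s = x / np.abs(x)                # +1.0 or -1.0, the sign of x
       frac = np.ceil(y) - np.floor(y)  # 0 when y is an integer, 1 otherwise
       correction = np.log((s - 1) * frac + 1) + (s - 1) * (y % 2) + 1
       return np.exp(np.log(np.abs(x)) * y) * correction
   
   print(fast_pow_reference(-3.0, 2.0))  # 9.0
   print(fast_pow_reference(-3.0, 3.0))  # -27.0
   print(fast_pow_reference(2.0, 0.5))   # ~1.4142
   ```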





Re: [PR] [TVMScript][TIR] Pretty print TIR LLVM function name [tvm]

2023-10-25 Thread via GitHub


ekalda commented on PR #15953:
URL: https://github.com/apache/tvm/pull/15953#issuecomment-1778785343

   Thanks @cbalint13 for the great work on this! 





[tvm] branch main updated: [TVMScript][TIR] Pretty print TIR LLVM function name (#15953)

2023-10-25 Thread ekalda
This is an automated email from the ASF dual-hosted git repository.

ekalda pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm.git


The following commit(s) were added to refs/heads/main by this push:
 new 885fc27390 [TVMScript][TIR] Pretty print TIR LLVM function name 
(#15953)
885fc27390 is described below

commit 885fc27390b5d0b902cfc17049363a2c68e2ac80
Author: Balint Cristian 
AuthorDate: Wed Oct 25 11:37:35 2023 +0300

[TVMScript][TIR] Pretty print TIR LLVM function name (#15953)

This allows the TIR printer to print the real name of an LLVM intrinsic.
Prior to this, a counter-intuitive T.int32() value was printed instead of
the real name.
Changes

Before: T.call_llvm_pure_intrin("int32x4", T.uint32(62), T.uint32(0))
After: T.call_llvm_pure_intrin("int32x4", "llvm.donothing", T.uint32(0))

This is part of #15918 .
---
 python/tvm/tir/op.py   |  9 
 src/script/printer/tir/expr.cc | 25 ++
 tests/python/unittest/test_tir_ops.py  |  7 ++
 .../python/unittest/test_tvmscript_printer_tir.py  |  7 ++
 4 files changed, 43 insertions(+), 5 deletions(-)

diff --git a/python/tvm/tir/op.py b/python/tvm/tir/op.py
index 905d14296d..d7df2a4bb6 100644
--- a/python/tvm/tir/op.py
+++ b/python/tvm/tir/op.py
@@ -16,7 +16,6 @@
 # under the License.
 # pylint: disable=redefined-builtin, invalid-name
 """Operators used in TIR expression."""
-import warnings
 from typing import Any, Optional
 
 import tvm._ffi
@@ -251,7 +250,7 @@ def call_llvm_intrin(dtype, name, *args, span=None):
The name of the llvm intrinsic function.
 
 args : list
-   Poistional arguments.
+   Positional arguments.
 
 span : Optional[Span]
 The location of this operator in the source code.
@@ -271,7 +270,7 @@ def call_llvm_intrin(dtype, name, *args, span=None):
 else:
 llvm_id = name
 if llvm_id == 0:
-warnings.warn(f"Unknown llvm intrinsic function {name}, falling back 
to 0")
+raise ValueError(f"Unknown llvm intrinsic function {name}")
 return call_intrin(
 dtype,
 Op.get("tir.call_llvm_intrin"),
@@ -293,7 +292,7 @@ def call_llvm_pure_intrin(dtype, name, *args, span=None):
The name of the llvm intrinsic function.
 
 args : list
-   Poistional arguments.
+   Positional arguments.
 
 span : Optional[Span]
 The location of this operator in the source code.
@@ -313,7 +312,7 @@ def call_llvm_pure_intrin(dtype, name, *args, span=None):
 else:
 llvm_id = name
 if llvm_id == 0:
-warnings.warn(f"Unknown llvm intrinsic function {name}, falling back 
to 0")
+raise ValueError(f"Unknown llvm intrinsic function {name}")
 return call_intrin(
 dtype,
 Op.get("tir.call_llvm_pure_intrin"),
diff --git a/src/script/printer/tir/expr.cc b/src/script/printer/tir/expr.cc
index 8de142f861..e25b074401 100644
--- a/src/script/printer/tir/expr.cc
+++ b/src/script/printer/tir/expr.cc
@@ -250,6 +250,31 @@ TVM_STATIC_IR_FUNCTOR(IRDocsifier, vtable)
   dtype_print_location =
   
static_cast(dtype_locations[op].IntValue());
 }
+if (name == "call_llvm_pure_intrin" || name == "call_llvm_intrin") {
+  int n_args = call->args.size();
+  int64_t id = call->args[0].as()->value;
+  auto f_llvm_lookup_intrinsic_name =
+  tvm::runtime::Registry::Get("target.llvm_get_intrinsic_name");
+
+  Array args;
+  args.reserve(n_args + 1);
+  if (dtype_print_location == tir::ScriptDtypePrintLocation::kFirst) {
+args.push_back(LiteralDoc::DataType(call->dtype, 
call_p->Attr("dtype")));
+  }
+
+  for (int i = 0; i < n_args; ++i) {
+if ((i == 0) && (f_llvm_lookup_intrinsic_name)) {
+  String name = (*f_llvm_lookup_intrinsic_name)(id);
+  args.push_back(LiteralDoc::Str(name.c_str(), 
call_p->Attr("args")->ArrayIndex(i)));
+} else {
+  args.push_back(d->AsDoc(call->args[i], 
call_p->Attr("args")->ArrayIndex(i)));
+}
+  }
+  if (dtype_print_location == tir::ScriptDtypePrintLocation::kLast) {
+args.push_back(LiteralDoc::DataType(call->dtype, 
call_p->Attr("dtype")));
+  }
+  return prefix->Call(args);
+}
   } else if (call->op.as()) {
 prefix = d->AsDoc(call->op, call_p->Attr("op"));
   } else {
diff --git a/tests/python/unittest/test_tir_ops.py 
b/tests/python/unittest/test_tir_ops.py
index 21981d1f0b..8cffe8171a 100644
--- a/tests/python/unittest/test_tir_ops.py
+++ b/tests/python/unittest/test_tir_ops.py
@@ -234,5 +234,12 @@ def test_comm_reducer(num_args):
 assert tvm.tir.max(*range(num_args)) == num_args - 1
 
 
+def test_llvm_intrin():
+with pytest.raises(ValueError, match=r"Unknown llvm intrinsic function 

Re: [PR] [TVMScript][TIR] Pretty print TIR LLVM function name [tvm]

2023-10-25 Thread via GitHub


ekalda merged PR #15953:
URL: https://github.com/apache/tvm/pull/15953

