Re: [PR] [Bugfix][Cutlass] fix cutlass instantiate attention template bugs [tvm]

2024-08-03 Thread via GitHub


tqchen merged PR #17229:
URL: https://github.com/apache/tvm/pull/17229





Re: [I] [Bug] [Relax] InternalError: Check failed: type_code_ == kTVMObjectHandle expected Object but got int [tvm]

2024-08-03 Thread via GitHub


Cookiee235 commented on issue #17235:
URL: https://github.com/apache/tvm/issues/17235#issuecomment-2266645814

   @Lunderberg Wow, it looks like a tremendous engineering change. Thank you 
for your continuous efforts! :muscle:





[I] [Bug] [Relax] Segfault error when parse the Relax IR [tvm]

2024-08-03 Thread via GitHub


Cookiee235 opened a new issue, #17239:
URL: https://github.com/apache/tvm/issues/17239

   ### Actual behavior
   
   ```
   [16:49:04] /software/tvm-lunder/src/runtime/logging.cc:390: TVM_LOG_DEBUG 
enables VLOG statements in 'ir/transform.cc' up to level 1
   [16:49:04] /software/tvm-lunder/src/runtime/logging.cc:390: TVM_LOG_DEBUG 
enables VLOG statements in 'relay/ir/transform.cc' up to level 1
   Segmentation fault (core dumped)
   ```
   
   
   ### Steps to reproduce
   
```python
from tvm.script import ir as I
from tvm.script import tir as T
from tvm.script import relax as R

@I.ir_module
class Module:
    @T.prim_func(private=True)
    def multiply_by_two(A: T.Buffer((16,), "float32")):
        for i in range(16):
            A[i] = A[i] * T.float32(2)

    @R.function
    def main(A: R.Tensor((16,), dtype="float32")) -> R.Tensor((16,), dtype="float32"):
        cls = Module
        args: R.Tuple(R.Tensor((16,), dtype="float32")) = (A,)
        gv1: R.Tensor((16,), dtype="float32") = R.call_tir_inplace(cls.multiply_by_two, args, out_sinfo=R.Tensor((16,), dtype="float32"), inplace_indices=[0])
        return gv1

m = Module
```
   
   
   cc @Lunderberg @junrushao 





Re: [PR] [3rdparty] Bump FlashInfer [tvm]

2024-08-03 Thread via GitHub


tqchen merged PR #17236:
URL: https://github.com/apache/tvm/pull/17236





[I] target.build.llvm is not enabled [tvm]

2024-08-02 Thread via GitHub


yizhihenpidehou opened a new issue, #17238:
URL: https://github.com/apache/tvm/issues/17238

   Thanks for participating in the TVM community! We use https://discuss.tvm.ai 
for any general usage questions and discussions. The issue tracker is used for 
actionable items such as feature proposals discussion, roadmaps, and bug 
tracking.  You are always welcomed to post on the forum first :smile_cat:
   
   Issues that are inactive for a period of time may get closed. We adopt this 
policy so that we won't lose track of actionable issues that may fall at the 
bottom of the pile. Feel free to reopen a new one if you feel there is an 
additional problem that needs attention when an old one gets closed.
   
   ### Expected behavior
   
   I want to run the code:

```python
rt_lib = tvm.build(MyModule, target="llvm")
```
   
   ### Actual behavior
   
```
Traceback (most recent call last):
  File "/Users/yizhihenpidehou/Desktop/tvm_experiment/lecture2_TensorIR/main.py", line 40, in <module>
    rt_lib = tvm.build(MyModule, target="llvm")
  File "/Users/yizhihenpidehou/Desktop/fdu/eg/tvm/python/tvm/driver/build_module.py", line 297, in build
    rt_mod_host = _driver_ffi.tir_to_runtime(annotated_mods, target_host)
  File "/Users/yizhihenpidehou/Desktop/fdu/eg/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 240, in __call__
    raise_last_ffi_error()
  File "/Users/yizhihenpidehou/Desktop/fdu/eg/tvm/python/tvm/_ffi/base.py", line 481, in raise_last_ffi_error
    raise py_err
tvm.error.InternalError: Traceback (most recent call last):
  File "/Users/yizhihenpidehou/Desktop/fdu/eg/tvm/src/target/codegen.cc", line 72
InternalError: Check failed: (bf != nullptr) is false: target.build.llvm is not enabled
```
   
   ### Environment
   
   tvm path: /Users/yizhihenpidehou/Desktop/fdu/eg/tvm/python/tvm/__init__.py
   Operating System: macOS (Apple M2, arm64)
   TVM version: 0.18.dev0
   https://github.com/user-attachments/assets/c24a2735-3189-43be-abca-4a3e69d41dbe
   
   ### Steps to reproduce
   1. Build TVM from source code.
   2. Set `set(USE_LLVM "/opt/homebrew/opt/llvm/bin/llvm-config --link-static")` in config.cmake.
   3. `cmake .. && make`
   4. `cd` into the `python` subfolder of tvm and run `python setup.py install`, but I got this error:
   https://github.com/user-attachments/assets/c301f5ad-3782-4afa-b9df-68a5457f4151
   5. Follow "Building with a Conda Environment" from the [conda install link](https://tvm.apache.org/docs/install/from_source.html#python-package-installation:~:text=with%20a%20Conda-,Environment,-%C2%B6).
   6. Run my code.
   
```python
import sys
import numpy as np
import tvm
from tvm.ir.module import IRModule
from tvm.script import tir as T

print("tvm path:", tvm.__file__)
print("tvm version:", tvm.__version__)

dtype = "float32"
a_np = np.random.rand(128, 128).astype(dtype)
b_np = np.random.rand(128, 128).astype(dtype)
c_mm_relu = np.maximum(a_np @ b_np, 0)

@tvm.script.ir_module
class MyModule:
    @T.prim_func
    def mm_relu(A: T.Buffer((128, 128), "float32"),
                B: T.Buffer((128, 128), "float32"),
                C: T.Buffer((128, 128), "float32")):
        T.func_attr({"global_symbol": "mm_relu", "tir.noalias": True})
        Y = T.alloc_buffer((128, 128), dtype="float32")
        for i, j, k in T.grid(128, 128, 128):
            with T.block("Y"):
                # [block_axis] = T.axis.[axis_type]([axis_range], [mapped_value])
                vi = T.axis.spatial(128, i)
                vj = T.axis.spatial(128, j)
                vk = T.axis.reduce(128, k)
                with T.init():
                    Y[vi, vj] = T.float32(0)
                Y[vi, vj] = Y[vi, vj] + A[vi, vk] * B[vk, vj]
        for i, j in T.grid(128, 128):
            with T.block("C"):
                vi = T.axis.spatial(128, i)
                vj = T.axis.spatial(128, j)
                C[vi, vj] = T.max(Y[vi, vj], T.float32(0))


rt_lib = tvm.build(MyModule, target="llvm")

a_nd = tvm.nd.array(a_np)
b_nd = tvm.nd.array(b_np)
c_nd = tvm.nd.empty((128, 128), dtype="float32")
print(type(c_nd))

func_mm_relu = rt_lib["mm_relu"]
func_mm_relu(a_nd, b_nd, c_nd)

np.testing.assert_allclose(c_mm_relu, c_nd.numpy(), rtol=1e-5)

import tvm
from tvm import te
import IPython

A = te.placeholder((128, 128), "float32", name="A")
B = te.placeholder((128, 128), "float32", name="B")
k = te.reduce_axis((0, 128), "k")
Y = te.compute((128, 128), lambda i, j: te.sum(A[i, k] * B[k, j], axis=k), name="Y")
C = te.compute((128, 128), lambda i, j: te.max(Y[i, j], 0), name="C")

te_func = te.create_prim_func([A, B, C]).with_attr({"global_symbol": "mm_relu"})
MyModuleFromTE = tvm.IRModule({"mm_relu": te_func})
print(IPython.display.Code(MyModuleFromTE.script(), language="python"))
```
   
   ### Triage
   
   Please refer to the list of label tags 

Re: [PR] [Runtime Patch] Add AbortSignal to fetchWithCache in ArtifactCacheTemplate interface [tvm]

2024-08-02 Thread via GitHub


tqchen merged PR #17233:
URL: https://github.com/apache/tvm/pull/17233





Re: [I] [Bug] [Relax] InternalError: Check failed: type_code_ == kTVMObjectHandle expected Object but got int [tvm]

2024-08-02 Thread via GitHub


Lunderberg commented on issue #17235:
URL: https://github.com/apache/tvm/issues/17235#issuecomment-2266148722

   Well, I've got good news and bad news.  The good news is that the bug can be boiled down to an even simpler minimal case, and can be resolved by #16183.
   
```python
import tvm
from tvm.script import ir as I, relax as R

@I.ir_module
class Module:
    @R.function
    def main():
        return (42,)

built = tvm.relax.build(Module, target="llvm")
vm = tvm.relax.VirtualMachine(built, tvm.cpu())
output = vm["main"]()

# With https://github.com/apache/tvm/pull/16183, these asserts pass.
assert len(output) == 1
assert isinstance(output[0], int)
assert output[0] == 42
```
   
   The downside is that #16183 is an absolute beast of a PR, touches pretty 
much every single part of TVM, caused and resolved breakage in unit tests 
across the board, and comes with my sincere apologies to code reviewers.  (But 
still a worthwhile change to make.)
   
   The root cause is that there are two distinct ways to represent an integer
in TVM.  It can be stored in the `TVMRetValue` type, with the `kDLInt` type
code, or as part of the `tvm::Object`/`tvm::ObjectRef` hierarchy.  There are 
some parts of the codebase that require a `kDLInt`, such as passing an integer 
to a native func.  There are other parts of the codebase that require an 
`ObjectRef`, such as storing an integer in a `tvm.runtime.Container`.  There 
are some automatic conversions applied in the FFI (e.g. converting from 
`kDLInt` to a `tvm.tir.IntImm`), but extending them would continue a trend of 
using compile-time types as part of the runtime library, and wouldn't be a good 
long-term plan.  In part, PR #16183 is so large because it re-establishes the 
division between the compile-time and runtime types, and then needed to update 
every location that relied on no-longer-present automatic conversions.
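   As a rough illustration of the two representations described above (a sketch under the assumption that the pre-#16183 automatic FFI conversion to `tvm.tir.IntImm` is in place; the exact boxed type depends on the TVM version):

```python
import tvm

# The same Python integer can surface on the TVM side in two forms:
#  1. as a POD value with the kDLInt type code when passed directly to a
#     PackedFunc argument slot, or
#  2. boxed into the tvm::Object/tvm::ObjectRef hierarchy when it has to be
#     stored in a runtime container, via the FFI's automatic conversion.
boxed = tvm.runtime.convert(42)
print(type(boxed))  # prior to #16183 this is typically tvm.tir.IntImm,
                    # i.e. a compile-time IR type leaking into runtime use
```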
   
   I'm going to merge `main` into the #16183 branch, to make sure the CI 
results aren't stale, and will see next week if I can get somebody to tackle 
the code review of it.





Re: [PR] [Runtime] Reorganize PagedKVCache attn kernel invocation [tvm]

2024-08-02 Thread via GitHub


MasterJH5574 commented on PR #17237:
URL: https://github.com/apache/tvm/pull/17237#issuecomment-2266139733

   Depends on #17236.





[PR] [Runtime] Reorganize PagedKVCache attn kernel invocation [tvm]

2024-08-02 Thread via GitHub


MasterJH5574 opened a new pull request, #17237:
URL: https://github.com/apache/tvm/pull/17237

   This PR reorganizes the attention kernel invocation logic in the 
PagedKVCache, so that in cases of sequence fork, we can effectively merge one 
ragged-prefill kernel and a decode kernel into a single decode kernel.





[PR] [3rdparty] Bump FlashInfer [tvm]

2024-08-02 Thread via GitHub


MasterJH5574 opened a new pull request, #17236:
URL: https://github.com/apache/tvm/pull/17236

   This PR bumps FlashInfer and updates PagedKVCache accordingly for 
performance improvement.
   
   Some notes on this bump:
   
   * When the Grouped-Query Attention group size is at least 4 and FlashInfer is enabled, we use the prefill attn kernel for better performance (a sketch of this rule follows below).
   * We enlarge the temporary workspace for FlashInfer accordingly, as the current FlashInfer version may consume a much larger workspace. We turn off this workspace when FlashInfer is not enabled.
   * We reduce the max block depth to 1, given the limited benefit of cascade inference when the batch size is not large and prompt reuse is low.
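   A minimal sketch of the dispatch rule from the first note, using illustrative names (`num_qo_heads`, `num_kv_heads`, and `flashinfer_enabled` are assumptions, not the actual PagedKVCache fields):

```python
def choose_attention_kernel(num_qo_heads: int, num_kv_heads: int,
                            flashinfer_enabled: bool) -> str:
    """Pick the attention kernel for a decode step (illustrative only)."""
    # Grouped-Query Attention group size: query heads per key/value head.
    group_size = num_qo_heads // num_kv_heads
    if flashinfer_enabled and group_size >= 4:
        # With a large enough group size, the prefill kernel is faster
        # even for single-token decode steps.
        return "prefill"
    return "decode"


# Example: 32 query heads sharing 8 KV heads gives a group size of 4.
print(choose_attention_kernel(32, 8, flashinfer_enabled=True))    # -> "prefill"
print(choose_attention_kernel(32, 32, flashinfer_enabled=True))   # -> "decode"
```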





Re: [PR] [Unity][Frontend] Add Sqrt Op [tvm]

2024-08-02 Thread via GitHub


tqchen commented on PR #17228:
URL: https://github.com/apache/tvm/pull/17228#issuecomment-2265742977

   let us mark that one as skip then





Re: [PR] [Unity][Frontend] Add Sqrt Op [tvm]

2024-08-02 Thread via GitHub


tlopex commented on PR #17228:
URL: https://github.com/apache/tvm/pull/17228#issuecomment-2265682177

   Is this CI error related to Relax? I saw it was caused by 
`tests/python/relay/test_to_mixed_precision.py::test_lstm_float64`.





Re: [PR] [Relax] FuseTransposeMatmul Pass [tvm]

2024-08-02 Thread via GitHub


tqchen merged PR #17234:
URL: https://github.com/apache/tvm/pull/17234





[I] [Bug] [Relax] InternalError: Check failed: type_code_ == kTVMObjectHandle expected Object but got int [tvm]

2024-08-02 Thread via GitHub


Cookiee235 opened a new issue, #17235:
URL: https://github.com/apache/tvm/issues/17235

   ### Actual behavior
   ```
   Traceback (most recent call last):
 File "/share_container/optfuzz/res/bugs/simple/obj_int.py", line 59, in 

   compile_mod(mod, input_0)
 File "/share_container/optfuzz/res/bugs/simple/obj_int.py", line 56, in 
compile_mod
   mod_outputs = vm['main'](*inputs)
 ^^^
 File "/software/tvm/python/tvm/_ffi/_ctypes/packed_func.py", line 239, in 
__call__
   raise_last_ffi_error()
 File "/software/tvm/python/tvm/_ffi/base.py", line 481, in 
raise_last_ffi_error
   raise py_err
   tvm.error.InternalError: Traceback (most recent call last):
 13: 
tvm::runtime::PackedFuncObj::Extractor 
>::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, 
tvm::runtime::TVMRetValue*)
 12: 
tvm::runtime::relax_vm::VirtualMachineImpl::InvokeClosurePacked(tvm::runtime::ObjectRef
 const&, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
 11: 
tvm::runtime::PackedFuncObj::Extractor 
>::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, 
tvm::runtime::TVMRetValue*)
 10: tvm::runtime::relax_vm::VirtualMachineImpl::InvokeBytecode(long, 
std::vector > const&)
 9: tvm::runtime::relax_vm::VirtualMachineImpl::RunLoop()
 8: 
tvm::runtime::relax_vm::VirtualMachineImpl::RunInstrCall(tvm::runtime::relax_vm::VMFrame*,
 tvm::runtime::relax_vm::Instruction)
 7: 
tvm::runtime::relax_vm::VirtualMachineImpl::InvokeClosurePacked(tvm::runtime::ObjectRef
 const&, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
 6: 
tvm::runtime::PackedFuncObj::Extractor 
>::Call(tvm::runtime::PackedFuncObj const*, tvm::runtime::TVMArgs, 
tvm::runtime::TVMRetValue*)
 5: tvm::runtime::relax_vm::VirtualMachineImpl::InvokeBytecode(long, 
std::vector > const&)
 4: tvm::runtime::relax_vm::VirtualMachineImpl::RunLoop()
 3: 
tvm::runtime::relax_vm::VirtualMachineImpl::RunInstrCall(tvm::runtime::relax_vm::VMFrame*,
 tvm::runtime::relax_vm::Instruction)
 2: 
tvm::runtime::relax_vm::VirtualMachineImpl::InvokeClosurePacked(tvm::runtime::ObjectRef
 const&, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
 1: 
tvm::runtime::PackedFuncObj::Extractor >::Call(tvm::runtime::PackedFuncObj const*, 
tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
 0: tvm::runtime::ObjectRef 
tvm::runtime::TVMPODValue_::AsObjectRef() const
 File "/software/tvm/include/tvm/runtime/packed_func.h", line 2080
   InternalError: Check failed: type_code_ == kTVMObjectHandle (0 vs. 8) : 
expected Object but got int
   ```
   
   ### Steps to reproduce
```python
import tvm
from tvm import relax
import numpy as np
from tvm.script import ir as I
from tvm.script import tir as T
from tvm.script import relax as R

@I.ir_module
class Module:
    @T.prim_func(private=True)
    def add1(C: T.Buffer((T.int64(16), T.int64(16)), "float32"), B: T.Buffer((T.int64(16), T.int64(16)), "float32"), T_add: T.Buffer((T.int64(16), T.int64(16)), "float32")):
        T.func_attr({"tir.noalias": T.bool(True)})
        # with T.block("root"):
        for ax0, ax1 in T.grid(T.int64(16), T.int64(16)):
            with T.block("T_add"):
                v_ax0, v_ax1 = T.axis.remap("SS", [ax0, ax1])
                T.reads(C[v_ax0, v_ax1], B[v_ax0, v_ax1])
                T.writes(T_add[v_ax0, v_ax1])
                T_add[v_ax0, v_ax1] = C[v_ax0, v_ax1] + B[v_ax0, v_ax1]

    @T.prim_func(private=True)
    def multiply(A: T.Buffer((T.int64(16), T.int64(16)), "float32"), T_multiply: T.Buffer((T.int64(16), T.int64(16)), "float32")):
        T.func_attr({"tir.noalias": T.bool(True)})
        # with T.block("root"):
        for ax0, ax1 in T.grid(T.int64(16), T.int64(16)):
            with T.block("T_multiply"):
                v_ax0, v_ax1 = T.axis.remap("SS", [ax0, ax1])
                T.reads(A[v_ax0, v_ax1])
                T.writes(T_multiply[v_ax0, v_ax1])
                T_multiply[v_ax0, v_ax1] = A[v_ax0, v_ax1] * T.float32(2)

    @R.function
    def transform_params(A: R.Tensor((16, 16), dtype="float32"), B: R.Tensor((16, 16), dtype="float32")) -> R.Tuple(R.Tensor((16, 16), dtype="float32"), R.Tensor((16, 16), dtype="float32"), R.Prim(value=42), R.Tensor((), dtype="float16")):
        cls = Module
        C = R.call_tir(cls.multiply, (A,), out_sinfo=R.Tensor((16, 16), dtype="float32"))
        D = R.call_tir(cls.add1, (C, B), out_sinfo=R.Tensor((16, 16), dtype="float32"))
        return (C, D, R.prim_value(42), R.const(17.5, "float16"))

    @R.function
    def main(para0: R.Tensor((16, 16), dtype="float32")) -> R.Tuple(R.Tensor((16, 16), dtype="float32"), R.Tensor((16, 16), dtype="float32"), R.Prim(value=42), R.Tensor((), dtype="float16")):
        cls = Module
        with R.dataflow():
            res:
```

Re: [PR] [Unity][Frontend] Add Sqrt Op [tvm]

2024-08-02 Thread via GitHub


Hzfengsy commented on PR #17228:
URL: https://github.com/apache/tvm/pull/17228#issuecomment-2264768193

   @tvm-bot rerun





[PR] [Relax] FuseTransposeMatmul Pass [tvm]

2024-08-02 Thread via GitHub


Hzfengsy opened a new pull request, #17234:
URL: https://github.com/apache/tvm/pull/17234

   Introduce a new pass to fuse transpose and matmul, which is especially useful for `Linear` ops in PyTorch and NNModule. Note that this pass is migrated from MLC-LLM.
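   A minimal sketch of the pattern such a pass targets (hedged: the module, shapes, and variable names below are illustrative, not taken from the PR): a PyTorch/NNModule `Linear` computes `x @ w.T`, which lowers to an explicit transpose followed by a matmul that can be fused into a single transposed-matmul kernel.

```python
from tvm.script import ir as I, relax as R

@I.ir_module
class LinearBefore:
    @R.function
    def main(x: R.Tensor((8, 16), "float32"),
             w: R.Tensor((32, 16), "float32")) -> R.Tensor((8, 32), "float32"):
        with R.dataflow():
            # Explicit transpose of the weight, as emitted for nn.Linear...
            w_t = R.permute_dims(w, axes=[1, 0])
            # ...followed by the matmul that the pass fuses with it.
            out = R.matmul(x, w_t)
            R.output(out)
        return out
```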





Re: [PR] [Relax] Apply DefaultGPUSchedule() in default build pipeline [tvm]

2024-08-01 Thread via GitHub


MadFunMaker commented on PR #17108:
URL: https://github.com/apache/tvm/pull/17108#issuecomment-2263937043

   LGTM - I think it's a clear usability gain without side effects or significant extra burden on the build pipeline.





Re: [I] [Bug] [Relax] Check failed: (it != this->var_arg_map_.end()) is false: Var is not defined [tvm]

2024-08-01 Thread via GitHub


Lunderberg commented on issue #17231:
URL: https://github.com/apache/tvm/issues/17231#issuecomment-2263470487

   No problem, and thank you for the high-quality bug reports!  Running into 
any of these failure modes in larger use cases can be very difficult to debug.  
My personal rule of thumb is that every `IRModule` should either be caught as 
ill-formed, or should compile without issue.  The errors you've been uncovering 
show that that clearly isn't the current case, but fixing them helps move 
toward that ideal.
   
   (With some exceptions for uncatchable issues, such as incorrect arguments 
used for external functions.)
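   A tiny sketch of that rule of thumb as it might appear in a test (hedged: the identity module below is a placeholder, and `relax.analysis.well_formed` is used here as the ill-formedness check):

```python
import tvm
from tvm import relax
from tvm.script import ir as I, relax as R

@I.ir_module
class Module:
    @R.function
    def main(x: R.Tensor((4,), "float32")) -> R.Tensor((4,), "float32"):
        return x

# Either the module is rejected as ill-formed here...
assert relax.analysis.well_formed(Module)
# ...or it should make it through compilation without an internal error.
ex = relax.build(Module, target="llvm")
vm = relax.VirtualMachine(ex, tvm.cpu())
```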





Re: [PR] [Relax] Implement R.ensure_zero_offset and update memory planning for R.view [tvm]

2024-08-01 Thread via GitHub


Lunderberg commented on PR #17145:
URL: https://github.com/apache/tvm/pull/17145#issuecomment-2263451826

   @vinx13 I took a look at the current CI failures, and it looks like it's pretty close to passing.  If you'd like, applying the diff below should resolve the last 4 failing tests in CI.
   
   
[pr_17145_diff.txt](https://github.com/user-attachments/files/16459245/pr_17145_diff.txt)
   





Re: [I] [Bug] [Relax] Check failed: (it != this->var_arg_map_.end()) is false: Var is not defined [tvm]

2024-08-01 Thread via GitHub


Cookiee235 commented on issue #17231:
URL: https://github.com/apache/tvm/issues/17231#issuecomment-2263446790

   @Lunderberg The test case can run correctly now under the given PR (#17232). 
Thanks for your efforts!





[PR] [Runtime Patch] Add AbortSignal to fetchWithCache in ArtifactCacheTemplate interface [tvm]

2024-08-01 Thread via GitHub


Neet-Nestor opened a new pull request, #17233:
URL: https://github.com/apache/tvm/pull/17233

   This is a patch for a missing change in 
https://github.com/apache/tvm/pull/17227, where we updated the function 
parameters of the `fetchWithCache` function implementations but not the 
interface.
   
   This tiny patch updates the function signature in the interface as well, to make it consistent with the implementation and also to expose it to clients.





Re: [PR] [Runtime Patch] Add AbortSignal to fetchWithCache in ArtifactCacheTemplate interface [tvm]

2024-08-01 Thread via GitHub


Neet-Nestor commented on PR #17233:
URL: https://github.com/apache/tvm/pull/17233#issuecomment-2263429942

   @tqchen Sorry for this miss :( 





Re: [PR] [Runtime] Allow aborting fetchWithCache through AbortSignal [tvm]

2024-08-01 Thread via GitHub


tqchen merged PR #17227:
URL: https://github.com/apache/tvm/pull/17227





Re: [I] [Bug] [Relax] Check failed: (it != this->var_arg_map_.end()) is false: Var is not defined [tvm]

2024-08-01 Thread via GitHub


Lunderberg commented on issue #17231:
URL: https://github.com/apache/tvm/issues/17231#issuecomment-2263378617

   Looks like a bug in the `LiftTransformParams` implementation, that it only 
determines the variables required at runtime based on the contents of 
`VarBinding`, and not from the output of a `Function`.  Should be fixed in 
https://github.com/apache/tvm/pull/17232.





[PR] [Relax] Lifted parameters bindings may also be function output [tvm]

2024-08-01 Thread via GitHub


Lunderberg opened a new pull request, #17232:
URL: https://github.com/apache/tvm/pull/17232

   Prior to this commit, the `relax.transform.LiftTransformParams` pass 
inspected the expression in each `relax::Binding` for variables that were 
required at runtime, but did not inspect the function's output. As a result, 
any value that could be computed at compile-time, and was either the function 
output or used in the function's output tuple, would be undefined in the 
inference function.
   
   This commit updates `LiftTransformParams` to collect variables from both the 
bound value of `relax::Binding`, and the function's output.
   
   While this error only impacted the `shared_transform=False` branch of `LiftTransformParams`, this commit also adds regression tests for the `shared_transform=True` use case of `LiftTransformParams`.
   
   Closes https://github.com/apache/tvm/issues/17231





Re: [I] [Bug] [Relax] Expect a not null value of IRModule [tvm]

2024-08-01 Thread via GitHub


Cookiee235 closed issue #17230: [Bug] [Relax]  Expect a not null value of 
IRModule
URL: https://github.com/apache/tvm/issues/17230





Re: [I] [Bug] [Relax] Expect a not null value of IRModule [tvm]

2024-08-01 Thread via GitHub


Lunderberg commented on issue #17230:
URL: https://github.com/apache/tvm/issues/17230#issuecomment-2263246222

   Looking closer, the change that I was thinking of already occurs 
([here](https://github.com/apache/tvm/blob/main/include/tvm/ir/module.h#L454)), 
and `IRModule` is non-nullable.  So, it already indicates (1) that the function 
signature is `(0: transform.Pass, 1: IRModule)`, (2) that there is a problem 
with `converting argument 1`, and (3) that the problem is that it was `Expect a 
not null value of IRModule`.
   
   So, the error message looks like it already gives all of the information it 
could.  Maybe it could be improved by returning a `TypeError` instead of a 
`TVMError`, but that would be about the extent of it.





Re: [I] [Bug] [Relax] Expect a not null value of IRModule [tvm]

2024-08-01 Thread via GitHub


Cookiee235 commented on issue #17230:
URL: https://github.com/apache/tvm/issues/17230#issuecomment-2263187197

   @Lunderberg Haha, I'm so sorry for misusing the API. Since I did not 
understand the error message, I submitted it directly.
   
   Indeed, improving the error message would be better. If you are willing to improve the error message, I'll leave this issue open. Otherwise, feel free to close this issue directly. Thanks!





Re: [I] fatal error: string_view: No such file or directory [tvm]

2024-08-01 Thread via GitHub


yizhihenpidehou closed issue #17209: fatal error: string_view: No such file or 
directory
URL: https://github.com/apache/tvm/issues/17209





Re: [I] fatal error: string_view: No such file or directory [tvm]

2024-08-01 Thread via GitHub


yizhihenpidehou commented on issue #17209:
URL: https://github.com/apache/tvm/issues/17209#issuecomment-2263150950

   > Should be doable with `update-alternatives` ([SO 
link](https://askubuntu.com/a/26518)).
   
   Thanks for your help!!! Now I can successfully compile and install tvm on my 
machine





Re: [I] [Bug] [Relax] Expect a not null value of IRModule [tvm]

2024-08-01 Thread via GitHub


Lunderberg commented on issue #17230:
URL: https://github.com/apache/tvm/issues/17230#issuecomment-2263142421

   > `mod = mod.show()`
   
   This line looks like a bug in the test case.  Displaying a module is not a 
transform, and so `mod.show()` returns `None`.  This then correctly errors out 
by detecting the nullptr in the `IRModule`.
   
   (The error message may be improved by making `IRModule` a non-nullable type, but a lot of the earlier portions of TVM rely on `ObjectRef` sub-types being nullable, so it would be a non-trivial change.  We would need to first find all instances of `IRModule` that should be replaced by `Optional<IRModule>`.)
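   A short sketch of the corrected usage, based on the reproducer in issue #17230 (the only change is that the result of `.show()` is no longer assigned back to `mod`):

```python
import tvm

# `Module` is the @I.ir_module class from the issue's reproducer.
mod = Module
mod = tvm.relax.transform.DeadCodeElimination()(mod)
mod.show()  # display only: .show() prints the module and returns None
mod = tvm.relax.transform.LegalizeOps()(mod)  # now receives a real IRModule
```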





[I] [Bug] [Relax] Check failed: (it != this->var_arg_map_.end()) is false: Var is not defined [tvm]

2024-08-01 Thread via GitHub


Cookiee235 opened a new issue, #17231:
URL: https://github.com/apache/tvm/issues/17231

   ### Actual behavior
   
   ```
   Traceback (most recent call last):
 File "/share_container/optfuzz/res/bugs/simple/res_undefined.py", line 49, 
in 
   compiled_after = compile_mod(relax.transform.LiftTransformParams()(mod))
^^^
 File "/share_container/optfuzz/res/bugs/simple/res_undefined.py", line 41, 
in compile_mod
   ex = relax.build(mod, target="llvm")
^^^
 File "/software/tvm-lunder/python/tvm/relax/vm_build.py", line 340, in 
build
   mod = _vmcodegen(builder, mod, exec_mode)
 ^^^
 File "/software/tvm-lunder/python/tvm/relax/vm_build.py", line 176, in 
_vmcodegen
   return _ffi_api.VMCodeGen(builder, mod)  # type:ignore
  
 File "/software/tvm-lunder/python/tvm/_ffi/_ctypes/packed_func.py", line 
240, in __call__
   raise_last_ffi_error()
 File "/software/tvm-lunder/python/tvm/_ffi/base.py", line 481, in 
raise_last_ffi_error
   raise py_err
   tvm.error.InternalError: Traceback (most recent call last):
 7: 
tvm::runtime::PackedFuncObj::Extractor::AssignTypedLambda(tvm::IRModule 
(*)(tvm::relax::ExecBuilder, tvm::IRModule), std::__cxx11::basic_string, std::allocator >)::{lambda(tvm::runtime::TVMArgs 
const&, tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj 
const*, tvm::runtime::TVMArgs, tvm::runtime::TVMRetValue*)
 6: tvm::relax::relax_vm::VMCodeGen(tvm::relax::ExecBuilder, tvm::IRModule)
 5: tvm::relax::relax_vm::CodeGenVM::Run(tvm::relax::ExecBuilder, 
tvm::IRModule)
 4: tvm::relax::relax_vm::CodeGenVM::Codegen(tvm::relax::Function const&)
 3: tvm::relax::ExprFunctor::VisitExpr(tvm::RelayExpr const&)
 2: tvm::relax::relax_vm::CodeGenVM::VisitExpr_(tvm::relax::SeqExprNode 
const*)
 1: tvm::relax::ExprFunctor::VisitExpr(tvm::RelayExpr const&)
 0: tvm::relax::relax_vm::CodeGenVM::VisitExpr_(tvm::relax::VarNode const*)
 File "/software/tvm-lunder/src/relax/backend/vm/codegen_vm.cc", line 232
   InternalError: Check failed: (it != this->var_arg_map_.end()) is false: Var 
w1_t is not defined
   ```
   
   
   ### Steps to reproduce
```python
import tvm
from tvm import relax
import numpy as np
from tvm.script import ir as I
from tvm.script import tir as T
from tvm.script import relax as R

@I.ir_module
class Module:
    @T.prim_func(private=True)
    def transpose(w1: T.Buffer((T.int64(256), T.int64(256)), "float32"), T_transpose: T.Buffer((T.int64(256), T.int64(256)), "float32")):
        T.func_attr({"tir.noalias": T.bool(True)})
        # with T.block("root"):
        for ax0, ax1 in T.grid(T.int64(256), T.int64(256)):
            with T.block("T_transpose"):
                v_ax0, v_ax1 = T.axis.remap("SS", [ax0, ax1])
                T.reads(w1[v_ax1, v_ax0])
                T.writes(T_transpose[v_ax0, v_ax1])
                T_transpose[v_ax0, v_ax1] = w1[v_ax1, v_ax0]

    @R.function(private=False)
    def main(x: R.Tensor((256, 256), dtype="float32"), w1: R.Tensor((256, 256), dtype="float32")) -> R.Tensor((256, 256), dtype="float32"):
        R.func_attr({"num_input": 1})
        cls = Module
        with R.dataflow():
            w1_t = R.call_tir(cls.transpose, (w1,), out_sinfo=R.Tensor((256, 256), dtype="float32"))
            R.output(w1_t)
        return w1_t

mod = Module
mod.show()
mod = tvm.relax.transform.LegalizeOps()(mod)


input_0 = tvm.nd.array(10 * np.random.random([256, 256]).astype('float32'))
input_1 = tvm.nd.array(10 * np.random.random([256, 256]).astype('float32'))

def compile_mod(mod):
    mod = relax.transform.FuseTIR()(mod)
    mod = relax.transform.LambdaLift()(mod)
    ex = relax.build(mod, target="llvm")
    vm = relax.VirtualMachine(ex, tvm.cpu())
    return vm


compiled_before = compile_mod(mod)
before_outputs = compiled_before["main"](input_0, input_1)

compiled_after = compile_mod(relax.transform.LiftTransformParams()(mod))
transformed_weights = compiled_after["main_transform_params"]([input_1])
after_outputs = compiled_after["main"](input_0, *transformed_weights)
```
   
   cc @Lunderberg @junrushao 
   





[I] [Bug] [Relax] Expect a not null value of IRModule [tvm]

2024-08-01 Thread via GitHub


Cookiee235 opened a new issue, #17230:
URL: https://github.com/apache/tvm/issues/17230

   ### Actual behavior
   
   ```
   Traceback (most recent call last):
 File "test.py", line 22, in 
   mod = tvm.relax.transform.LegalizeOps()(mod)
 ^^
 File "/software/tvm-lunder/python/tvm/ir/transform.py", line 238, in 
__call__
   return _ffi_transform_api.RunPass(self, mod)
  ^
 File "/software/tvm-lunder/python/tvm/_ffi/_ctypes/packed_func.py", line 
240, in __call__
   raise_last_ffi_error()
 File "/software/tvm-lunder/python/tvm/_ffi/base.py", line 481, in 
raise_last_ffi_error
   raise py_err
   tvm._ffi.base.TVMError: Traceback (most recent call last):
 1: 
tvm::runtime::PackedFuncObj::Extractor::AssignTypedLambda(tvm::transform::{lambda(tvm::transform::Pass, 
tvm::IRModule)#7}, std::__cxx11::basic_string, 
std::allocator >)::{lambda(tvm::runtime::TVMArgs const&, 
tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, 
std::__cxx11::basic_string, std::allocator 
>, tvm::runtime::TVMRetValue)
 0: tvm::runtime::TVMMovableArgValueWithContext_::operator 
tvm::IRModule() const
 2: 
tvm::runtime::PackedFuncObj::Extractor::AssignTypedLambda(tvm::transform::{lambda(tvm::transform::Pass, 
tvm::IRModule)#7}, std::__cxx11::basic_string, 
std::allocator >)::{lambda(tvm::runtime::TVMArgs const&, 
tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, 
std::__cxx11::basic_string, std::allocator 
>, tvm::runtime::TVMRetValue)
 1: tvm::runtime::TVMMovableArgValueWithContext_::operator 
tvm::IRModule() const
 0: tvm::IRModule tvm::runtime::TVMPODValue_::AsObjectRef() 
const
 File "/software/tvm-lunder/include/tvm/runtime/packed_func.h", line 785
   TVMError: In function transform.RunPass(0: transform.Pass, 1: IRModule) -> 
IRModule: error while converting argument 1: [21:44:09] 
/software/tvm-lunder/include/tvm/runtime/packed_func.h:2022: Check failed: 
(TObjectRef::_type_is_nullable) is false: Expect a not null value of IRModule
   ```
   
   
   
   ### Steps to reproduce
   
```python
import tvm
from tvm import relax

from tvm.script import ir as I
from tvm.script import tir as T
from tvm.script import relax as R

@I.ir_module
class Module:
    @R.function
    def main(v1_0: R.Tensor((1,), dtype="float16"), v6_0: R.Tensor((1, 40, 25, 1), dtype="float16")) -> R.Tensor((), dtype="float16"):
        with R.dataflow():
            lv: R.Tensor((), dtype="float16") = R.sum(v6_0, axis=None, keepdims=False)
            R.output(lv)
        return lv


mod = Module
mod = tvm.relax.transform.DeadCodeElimination()(mod)
mod = mod.show()
mod = tvm.relax.transform.LegalizeOps()(mod)
```
   
   cc @Lunderberg @junrushao 





Re: [PR] [Bugfix][Cutlass] fix cutlass instantiate attention template bugs [tvm]

2024-08-01 Thread via GitHub


senlyu163 commented on PR #17229:
URL: https://github.com/apache/tvm/pull/17229#issuecomment-2263084341

   cc @masahi @sunggg @tqchen 





[PR] [Bugfix][Cutlass] fix cutlass instantiate attention template bugs [tvm]

2024-08-01 Thread via GitHub


senlyu163 opened a new pull request, #17229:
URL: https://github.com/apache/tvm/pull/17229

   Fixed a bug in the CUTLASS BYOC instantiation of the attention template, which would cause a CUDA illegal-address error.





Re: [PR] [Relax] Fix segfault in rewrite_bindings for MatchCast node [tvm]

2024-08-01 Thread via GitHub


Lunderberg merged PR #17226:
URL: https://github.com/apache/tvm/pull/17226





Re: [I] [Relax][Bug] Cannot find PackedFunc tir_zeros [tvm]

2024-08-01 Thread via GitHub


jpf888 commented on issue #17176:
URL: https://github.com/apache/tvm/issues/17176#issuecomment-2262290364

   @Lunderberg 
   I resolved the issue by adding multiple dispatches to the MLC-LLM compile pipeline. The problem consistently occurred with the first dispatch, but when keeping just one dispatch (any dispatch), it worked!





Re: [I] [Bug] Error opening FastRPC channel [tvm]

2024-07-31 Thread via GitHub


abhikran-quic commented on issue #17195:
URL: https://github.com/apache/tvm/issues/17195#issuecomment-2261940916

   Are you able to run the calculator example with `testsig.so` on the development board? This will help in identifying whether FastRPC is working properly.





[PR] [Unity][Frontend] Add Sqrt Op [tvm]

2024-07-31 Thread via GitHub


tlopex opened a new pull request, #17228:
URL: https://github.com/apache/tvm/pull/17228

   This PR adds a new op to the frontend: Sqrt.





Re: [I] [Bug] Error opening FastRPC channel [tvm]

2024-07-31 Thread via GitHub


chayliu-ecarx commented on issue #17195:
URL: https://github.com/apache/tvm/issues/17195#issuecomment-2261837827

   The `testsig.so` didn't solve this problem.

   I switched to using an SA8155 development board for testing; however, there is another issue.

   Run as:
   ```
   export LD_LIBRARY_PATH=/data/local/tmp/tvm
   export ADSP_LIBRARY_PATH="/data/local/tmp/tvm/adsp;" 
   
   ./launcher_android --in_config mobilenetv2-7_input.json --out_config 
output.json
   ```
   After that, it seems like the program was blocked and not moving forward.

   The log is:
   ```
   01-01 01:23:00.400  8891  8891 I launcher_android: 
vendor/qcom/proprietary/commonsys-intf/adsprpc/src/rpcmem_android.c:158: 
rpcmem_init_internal: opened ION device fd 3, configured heap IDs: system 
(0x200), contig (0x10), secure (0x400), secure flags (0x8008)
   01-01 01:23:00.400  8891  8891 I launcher_android: 
vendor/qcom/proprietary/commonsys-intf/adsprpc/src/fastrpc_apps_user.c:2832: 
fastrpc_apps_user_init done
   01-01 01:23:00.402  8891  8891 I launcher_android: 
vendor/qcom/proprietary/commonsys-intf/adsprpc/src/fastrpc_config.c:136: 
Reading configuration file: launcher_android.debugconfig
   01-01 01:23:00.402  8891  8891 I launcher_android: 
vendor/qcom/proprietary/commonsys-intf/adsprpc/src/fastrpc_config.c:156: Read 
fastrpc config file launcher_android.debugconfig found at 
/data/local/tmp/tvm/adsp
   ```
   A few minutes later, it was still like this, without any output.
   
   
   





Re: [I] [Relax][Bug] Cannot find PackedFunc tir_zeros [tvm]

2024-07-31 Thread via GitHub


jpf888 commented on issue #17176:
URL: https://github.com/apache/tvm/issues/17176#issuecomment-2261794051

   @Lunderberg
   1. When I apply the dispatch in MLC-LLM, this issue occurs. When the pattern matching is applied, is the call to the fused kernel generated as `R.call_dps_packed("fused_relax_nn_conv2d_cudnn", args)` in the class `Module`?

   2. When run in the TVM test case, it works fine. Log:

   **Before pattern:**

```python
from tvm.script import ir as I
from tvm.script import relax as R

@I.ir_module
class Module:
    @R.function
    def main(data: R.Tensor((16, 32, 32, 16), dtype="float16"), weight: R.Tensor((32, 3, 3, 16), dtype="float16")) -> R.Tensor((16, 32, 32, 32), dtype="float16"):
        with R.dataflow():
            lv: R.Tensor((16, 32, 32, 32), dtype="float16") = R.nn.conv2d(data, weight, strides=[1, 1], padding=[1, 1, 1, 1], dilation=[1, 1], groups=1, data_layout="NHWC", kernel_layout="OHWI", out_layout="NHWC", out_dtype="float16")
            R.output(lv)
        return lv
```

   **After pattern:**

```python
from tvm.script import ir as I
from tvm.script import relax as R

@I.ir_module
class Module:
    @R.function
    def fused_relax_nn_conv2d_cudnn(data: R.Tensor((16, 32, 32, 16), dtype="float16"), weight: R.Tensor((32, 3, 3, 16), dtype="float16")) -> R.Tensor((16, 32, 32, 32), dtype="float16"):
        R.func_attr({"Codegen": "cudnn"})
        # from tvm.script import relax as R

        @R.function
        def local_func(data_1: R.Tensor((16, 32, 32, 16), dtype="float16"), weight_1: R.Tensor((32, 3, 3, 16), dtype="float16")) -> R.Tensor((16, 32, 32, 32), dtype="float16"):
            R.func_attr({"Composite": "cudnn.conv2d.nhwc_ohwi"})
            with R.dataflow():
                gv: R.Tensor((16, 32, 32, 32), dtype="float16") = R.nn.conv2d(data_1, weight_1, strides=[1, 1], padding=[1, 1, 1, 1], dilation=[1, 1], groups=1, data_layout="NHWC", kernel_layout="OHWI", out_layout="NHWC", out_dtype="float16")
                R.output(gv)
            return gv

        output: R.Tensor((16, 32, 32, 32), dtype="float16") = local_func(data, weight)
        return output

    @R.function
    def main(data: R.Tensor((16, 32, 32, 16), dtype="float16"), weight: R.Tensor((32, 3, 3, 16), dtype="float16")) -> R.Tensor((16, 32, 32, 32), dtype="float16"):
        cls = Module
        with R.dataflow():
            gv: R.Tensor((16, 32, 32, 32), dtype="float16") = cls.fused_relax_nn_conv2d_cudnn(data, weight)
            R.output(gv)
        return gv
```
   
   
   
   





Re: [PR] [Runtime] Allow aborting fetchWithCache through AbortSignal [tvm]

2024-07-31 Thread via GitHub


Neet-Nestor commented on PR #17227:
URL: https://github.com/apache/tvm/pull/17227#issuecomment-2261771822

   cc. @tqchen 





[PR] [Runtime] Allow aborting fetchWithCache through AbortSignal [tvm]

2024-07-31 Thread via GitHub


Neet-Nestor opened a new pull request, #17227:
URL: https://github.com/apache/tvm/pull/17227

   This is a follow-up for a previous change 
https://github.com/apache/tvm/pull/17208.
   
   This Pull Request updates function `fetchWithCache()` in TVM runtime class 
to accept an optional parameter `signal: AbortSignal`, so that users could use 
`AbortController` to abort the fetch process if needed.
   
   https://developer.mozilla.org/en-US/docs/Web/API/AbortController
   
   Related issues:
   https://github.com/mlc-ai/web-llm/issues/484
   https://github.com/mlc-ai/web-llm/issues/499





Re: [PR] Replacing unary ops with LookUpTable and Take op to improve performance [tvm]

2024-07-31 Thread via GitHub


jverma-quic commented on code in PR #17214:
URL: https://github.com/apache/tvm/pull/17214#discussion_r1699130275


##
python/tvm/contrib/hexagon/generate_take_op.py:
##
@@ -0,0 +1,86 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=missing-docstring, invalid-name, unnecessary-comprehension, 
unused-argument
+
+import tvm
+import tvm.testing
+from tvm import relax
+from tvm.contrib.hexagon import hexagon_unary_ops
+
+
+def op_replace(call_node):
+    def is_op(op_name: str, call_node: relax.Call) -> bool:
+        if not isinstance(call_node, relax.Call):
+            return False
+        call_tir_op = tvm.ir.Op.get("relax.call_tir")
+        if call_node.op != call_tir_op:
+            return False
+        global_var = call_node.args[0]
+        return op_name in global_var.name_hint
+
+    ops = ["tanh", "sqrt", "rsqrt", "exp", "erf", "sigmoid", "hardswish", "log", "abs"]
+    for op in ops:
+        if is_op(op, call_node):
+            return True
+    return False
+
+
+@relax.expr_functor.mutator
+class Tanh2TakeReplace(tvm.relax.PyExprMutator):
+    def __init__(self, mod: tvm.IRModule) -> None:
+        super().__init__(mod)
+        self.mod_ = mod
+
+    def transform(self) -> tvm.IRModule:
+        # Iterate over all the nodes to check for the node replaceable
+        for global_var, func in self.mod_.functions.items():
+            # Skip non-relax functions
+            if not isinstance(func, relax.Function):
+                continue
+            updated_func = self.visit_expr(func)
+            self.builder_.normalize(updated_func)
+            self.builder_.update_func(global_var, updated_func)
+        # At the end of the transformation we return the updated IRModule from the BlockBuilder.
+        return self.builder_.get()
+
+    def visit_call_(self, call_node: relax.Call) -> relax.Call:
+        if call_node.args[1][0].struct_info.dtype == "uint8":

Review Comment:
   > Should we verify whether the call_node is a `relax.call_tir` op before 
accessing the args?
   
   @quic-sanirudh: wouldn't it be guaranteed since we're only visiting the call 
nodes? 



##
python/tvm/contrib/hexagon/generate_take_op.py:
##
@@ -0,0 +1,86 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=missing-docstring, invalid-name, unnecessary-comprehension, 
unused-argument
+
+import tvm
+import tvm.testing
+from tvm import relax
+from tvm.contrib.hexagon import hexagon_unary_ops
+
+
+def op_replace(call_node):
+    def is_op(op_name: str, call_node: relax.Call) -> bool:
+        if not isinstance(call_node, relax.Call):
+            return False
+        call_tir_op = tvm.ir.Op.get("relax.call_tir")
+        if call_node.op != call_tir_op:
+            return False
+        global_var = call_node.args[0]
+        return op_name in global_var.name_hint

Review Comment:
   I agree with you that relying on the global_var is not the best way to 
identify the operators for this transformation. However, I don't really think 
that operator_name will be much better. The problem here is that we lower the 
graph to Relay first and then during translation to Relax, the operator 
knowledge is lost. @Lunderberg's suggestion would have worked very well if we 
could have imported the graph directly to Relax and then before legalizing it, 
we could have replaced R.tanh with R.take(..).
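   A rough sketch of that suggested alternative (hedged: `make_tanh_lut` and `rewrite_tanh_to_take` are hypothetical helpers with assumed quantization parameters, not the code under review): before legalization, a `uint8` `R.tanh` could be rewritten into a 256-entry lookup table consumed by `R.take`.

```python
import numpy as np
import tvm
from tvm import relax

def make_tanh_lut(scale: float, zero_point: int) -> relax.Constant:
    """Precompute tanh over every possible uint8 input value."""
    x = (np.arange(256, dtype=np.float32) - zero_point) * scale
    y = np.tanh(x)
    q = np.clip(np.round(y / scale) + zero_point, 0, 255).astype("uint8")
    return relax.const(q)

def rewrite_tanh_to_take(bb: relax.BlockBuilder, data: relax.Expr,
                         scale: float, zero_point: int) -> relax.Expr:
    # Replace R.tanh(data) with a table lookup: each uint8 activation is
    # itself the index into the precomputed table.
    lut = make_tanh_lut(scale, zero_point)
    flat = bb.emit(relax.op.reshape(data, (-1,)))
    idx = bb.emit(relax.op.astype(flat, "int32"))
    taken = bb.emit(relax.op.take(lut, idx, axis=0))
    return bb.emit(relax.op.reshape(taken, data.struct_info.shape))
```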




Re: [PR] [Relax] Allow `out_sinfo` to be omitted from `R.call_tir` [tvm]

2024-07-31 Thread via GitHub


Lunderberg commented on PR #17216:
URL: https://github.com/apache/tvm/pull/17216#issuecomment-2261335693

   > Having such explicit argument makes the "intent" clear, with the explicit 
sinfo, we can write down the semantics in a clear fashion
   
   Good point on the semantics.  This change would add an additional step to 
the user-facing semantics of `R.call_tir`. 
   
```python
def call_tir(func, args, out_sinfo):
    if out_sinfo is None:
        out_sinfo = infer_out_sinfo(func, args)  # may throw

    out = alloc_outputs(out_sinfo)
    func(*args, unpack_outputs(out))
    return out
```
   
   I suppose what I'm getting stuck on is the "intent" part.  While there are 
exceptions, in the majority of cases, there's one and only one correct value 
for `out_sinfo`.  Since the user doesn't have any choice in it, we can't infer 
any intention from the user about it.  On the other hand, if the user has the 
option of omitting the `out_sinfo`, then we could distinguish between the 
intent of "use whichever output is valid" (e.g. `R.call_tir(unary_abs, [x])`) 
and "verify and use the output I expect" (e.g. `R.call_tir(unary_abs, [x], 
R.Tensor([16],'float16'))`).
   
   > In this particular case, having good well form check about consistency 
would help a lot toward that direction
   
   Agreed.  I think for now, let's put this PR on hold, and I'll update the 
well-formed checker to verify consistency between the `R.call_tir` callee and 
the input/output arguments.  (Since that's a change that we both agree on, and 
covers many of the same error modes.)


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Replacing unary ops with LookUpTable and Take op to improve performance [tvm]

2024-07-31 Thread via GitHub


Lunderberg commented on code in PR #17214:
URL: https://github.com/apache/tvm/pull/17214#discussion_r1699006346


##
python/tvm/contrib/hexagon/generate_take_op.py:
##
@@ -0,0 +1,86 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=missing-docstring, invalid-name, unnecessary-comprehension, 
unused-argument
+
+import tvm
+import tvm.testing
+from tvm import relax
+from tvm.contrib.hexagon import hexagon_unary_ops
+
+
+def op_replace(call_node):
+def is_op(op_name: str, call_node: relax.Call) -> bool:
+if not isinstance(call_node, relax.Call):
+return False
+call_tir_op = tvm.ir.Op.get("relax.call_tir")
+if call_node.op != call_tir_op:
+return False
+global_var = call_node.args[0]
+return op_name in global_var.name_hint
+
+ops = ["tanh", "sqrt", "rsqrt", "exp", "erf", "sigmoid", "hardswish", 
"log", "abs"]
+for op in ops:
+if is_op(op, call_node):
+return True
+return False
+
+
+@relax.expr_functor.mutator
+class Tanh2TakeReplace(tvm.relax.PyExprMutator):
+def __init__(self, mod: tvm.IRModule) -> None:
+super().__init__(mod)
+self.mod_ = mod
+
+def transform(self) -> tvm.IRModule:
+# Iterate over all the nodes to check for the node replaceable
+for global_var, func in self.mod_.functions.items():
+# Skip non-relax functions
+if not isinstance(func, relax.Function):
+continue
+updated_func = self.visit_expr(func)
+self.builder_.normalize(updated_func)
+self.builder_.update_func(global_var, updated_func)
+# At the end of the transformation we return the updated IRModule from 
the BlockBuilder.
+return self.builder_.get()
+
+def visit_call_(self, call_node: relax.Call) -> relax.Call:
+if call_node.args[1][0].struct_info.dtype == "uint8":
+if op_replace(call_node):
+inp, inp_scale, inp_zp, out_scale, out_zp = [x for x in 
call_node.args[1]]
+# LUT node creation
+LUT = hexagon_unary_ops.LUT_generation(

Review Comment:
   When is this pass intended to be applied?  If it can be moved to before 
`LegalizeOps`, then that would make it easier to define the lookup table as a 
Relax expression (using `R.arange(0,256,'uint8')` as all possible quantized 
values, passing it through the relax operations, then finishing with 
`R.take(computed_table, inp)`).  This would be simplified by the 
`FoldConstantPass` to the same `R.take(R.const(...), inp)` which is generated 
here, but wouldn't require explicit handling of each unary operation.
   
   That would also allow the pattern-matching to be done based on the Relax 
operations themselves, rather than their lowered names.
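   A loose sketch of that approach (illustrative only; the helper name is made up
and the dequantize/requantize arithmetic around the unary op is elided):
   
   ```python
   from tvm import relax
   
   def take_based_unary(inp: relax.Expr) -> relax.Expr:
       # Enumerate every possible quantized input value once.
       indices = relax.op.arange(0, 256, 1, dtype="uint8")
       # Apply the original unary op to the whole table; after legalization,
       # FoldConstant can collapse this chain into a single constant.
       table = relax.op.tanh(relax.op.astype(indices, "float32"))
       # At runtime the unary op then reduces to a single gather.
       # (Requantizing `table` back to uint8 is omitted here.)
       return relax.op.take(table, inp, axis=0)
   ```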



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Replacing unary ops with LookUpTable and Take op to improve performance [tvm]

2024-07-31 Thread via GitHub


Lunderberg commented on code in PR #17214:
URL: https://github.com/apache/tvm/pull/17214#discussion_r1699006346


##
python/tvm/contrib/hexagon/generate_take_op.py:
##
@@ -0,0 +1,86 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=missing-docstring, invalid-name, unnecessary-comprehension, 
unused-argument
+
+import tvm
+import tvm.testing
+from tvm import relax
+from tvm.contrib.hexagon import hexagon_unary_ops
+
+
+def op_replace(call_node):
+def is_op(op_name: str, call_node: relax.Call) -> bool:
+if not isinstance(call_node, relax.Call):
+return False
+call_tir_op = tvm.ir.Op.get("relax.call_tir")
+if call_node.op != call_tir_op:
+return False
+global_var = call_node.args[0]
+return op_name in global_var.name_hint
+
+ops = ["tanh", "sqrt", "rsqrt", "exp", "erf", "sigmoid", "hardswish", 
"log", "abs"]
+for op in ops:
+if is_op(op, call_node):
+return True
+return False
+
+
+@relax.expr_functor.mutator
+class Tanh2TakeReplace(tvm.relax.PyExprMutator):
+def __init__(self, mod: tvm.IRModule) -> None:
+super().__init__(mod)
+self.mod_ = mod
+
+def transform(self) -> tvm.IRModule:
+# Iterate over all the nodes to check for the node replaceable
+for global_var, func in self.mod_.functions.items():
+# Skip non-relax functions
+if not isinstance(func, relax.Function):
+continue
+updated_func = self.visit_expr(func)
+self.builder_.normalize(updated_func)
+self.builder_.update_func(global_var, updated_func)
+# At the end of the transformation we return the updated IRModule from 
the BlockBuilder.
+return self.builder_.get()
+
+def visit_call_(self, call_node: relax.Call) -> relax.Call:
+if call_node.args[1][0].struct_info.dtype == "uint8":
+if op_replace(call_node):
+inp, inp_scale, inp_zp, out_scale, out_zp = [x for x in 
call_node.args[1]]
+# LUT node creation
+LUT = hexagon_unary_ops.LUT_generation(

Review Comment:
   When is this pass intended to be applied?  If it can be moved to before 
`LegalizeOps`, then that would make it easier to define the lookup table as a 
Relax expression, which would then be handled by the `FoldConstant` pass.
   
   That would also allow the pattern-matching to be done based on the Relax 
operations themselves, rather than their lowered names.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Replacing unary ops with LookUpTable and Take op to improve performance [tvm]

2024-07-31 Thread via GitHub


Lunderberg commented on code in PR #17214:
URL: https://github.com/apache/tvm/pull/17214#discussion_r1699006346


##
python/tvm/contrib/hexagon/generate_take_op.py:
##
@@ -0,0 +1,86 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=missing-docstring, invalid-name, unnecessary-comprehension, 
unused-argument
+
+import tvm
+import tvm.testing
+from tvm import relax
+from tvm.contrib.hexagon import hexagon_unary_ops
+
+
+def op_replace(call_node):
+def is_op(op_name: str, call_node: relax.Call) -> bool:
+if not isinstance(call_node, relax.Call):
+return False
+call_tir_op = tvm.ir.Op.get("relax.call_tir")
+if call_node.op != call_tir_op:
+return False
+global_var = call_node.args[0]
+return op_name in global_var.name_hint
+
+ops = ["tanh", "sqrt", "rsqrt", "exp", "erf", "sigmoid", "hardswish", 
"log", "abs"]
+for op in ops:
+if is_op(op, call_node):
+return True
+return False
+
+
+@relax.expr_functor.mutator
+class Tanh2TakeReplace(tvm.relax.PyExprMutator):
+def __init__(self, mod: tvm.IRModule) -> None:
+super().__init__(mod)
+self.mod_ = mod
+
+def transform(self) -> tvm.IRModule:
+# Iterate over all the nodes to check for the node replaceable
+for global_var, func in self.mod_.functions.items():
+# Skip non-relax functions
+if not isinstance(func, relax.Function):
+continue
+updated_func = self.visit_expr(func)
+self.builder_.normalize(updated_func)
+self.builder_.update_func(global_var, updated_func)
+# At the end of the transformation we return the updated IRModule from 
the BlockBuilder.
+return self.builder_.get()
+
+def visit_call_(self, call_node: relax.Call) -> relax.Call:
+if call_node.args[1][0].struct_info.dtype == "uint8":
+if op_replace(call_node):
+inp, inp_scale, inp_zp, out_scale, out_zp = [x for x in 
call_node.args[1]]
+# LUT node creation
+LUT = hexagon_unary_ops.LUT_generation(

Review Comment:
   When is this pass intended to be applied?  If it's moved to before 
`LegalizeOps`, then that would make it easier to define the lookup table as a 
Relax expression, which would then be handled by the `FoldConstant` pass.
   
   That would also allow the pattern-matching to be done based on the Relax 
operations themselves, rather than their lowered names.



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [TIR] Enhance Lower cross thread Pass [tvm]

2024-07-31 Thread via GitHub


tqchen commented on PR #17133:
URL: https://github.com/apache/tvm/pull/17133#issuecomment-2261255860

   @LeiWang1999 please fix the lint and test case, @wrongtest-intellif do you 
mind help review the PR


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Relax][Bug] Segmentation fault when using the MergeCompositeFunctions transform [tvm]

2024-07-31 Thread via GitHub


tqchen closed issue #17120: [Relax][Bug] Segmentation fault when using the 
MergeCompositeFunctions transform
URL: https://github.com/apache/tvm/issues/17120


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [Relax] Handle presence of R.call_tir in MergeCompositeFunctions [tvm]

2024-07-31 Thread via GitHub


tqchen merged PR #17220:
URL: https://github.com/apache/tvm/pull/17220


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Relax][Bug] Segmentation fault when using the MergeCompositeFunctions transform [tvm]

2024-07-31 Thread via GitHub


tqchen closed issue #17120: [Relax][Bug] Segmentation fault when using the 
MergeCompositeFunctions transform
URL: https://github.com/apache/tvm/issues/17120


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [Relax] Allow `out_sinfo` to be omitted from `R.call_tir` [tvm]

2024-07-31 Thread via GitHub


tqchen commented on PR #17216:
URL: https://github.com/apache/tvm/pull/17216#issuecomment-2261226877

   Thanks for pointing out the frontend case. I still think being explicit is 
helpful and allows a consistency check with good error messages. Having such an 
explicit argument makes the "intent" clear. Say we directly transpile the code 
to Python: with the explicit sinfo, we can write down the semantics explicitly
   
   ```python
   def call_tir(func, args, out_sinfo):
       out = alloc_outputs(out_sinfo)
       func(*args, unpack_outputs(out))
       return out
   ```
   
   Omitting the out_sinfo, while indeed OK in some cases, is not always 
derivable, and the intent is less clear. I know the argument can go the other 
way, toward reducing the amount users have to type. In this particular case, 
having a good well-formedness check for consistency would help a lot in that 
direction.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [Relax] Fix segfault in rewrite_bindings for MatchCast node [tvm]

2024-07-31 Thread via GitHub


Lunderberg opened a new pull request, #17226:
URL: https://github.com/apache/tvm/pull/17226

   Prior to this commit, the `tvm.relax.dpl.rewrite_bindings` utility would 
segfault if its input contained a `DataflowBlock` whose first binding was a 
`MatchCast`.
   
   The root cause is use of an uninitialized `const VarNode* cur_user_;` when 
collecting the variable usage.  This variable is only initialized for 
`VarBinding` nodes, and may be used uninitialized if a `MatchCast` node is 
encountered before the first `VarBinding`.  This uninitialized value is later 
dereferenced while pattern-matching, causing a segfault.
   
   This commit provides a default value of `nullptr` for 
`MatcherUseDefAnalysis::cur_user_`, preventing the segfault.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Bug] CreateModulePass has an inconsistent para_format between BindSymbolicVars and other pass [tvm]

2024-07-31 Thread via GitHub


Lunderberg commented on issue #17225:
URL: https://github.com/apache/tvm/issues/17225#issuecomment-2260973635

   No problem, and always happy to help!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Bug] CreateModulePass has an inconsistent para_format between BindSymbolicVars and other pass [tvm]

2024-07-31 Thread via GitHub


Cookiee235 commented on issue #17225:
URL: https://github.com/apache/tvm/issues/17225#issuecomment-2260963694

   @Lunderberg Thanks for your explanation. Sorry to report such a trivial 
issue. I closed it!


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Bug] CreateModulePass has an inconsistent para_format between BindSymbolicVars and other pass [tvm]

2024-07-31 Thread via GitHub


Cookiee235 closed issue #17225: [Bug] CreateModulePass has an inconsistent 
para_format between BindSymbolicVars and other pass
URL: https://github.com/apache/tvm/issues/17225


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Bug] CreateModulePass has an inconsistent para_format between BindSymbolicVars and other pass [tvm]

2024-07-31 Thread via GitHub


Lunderberg commented on issue #17225:
URL: https://github.com/apache/tvm/issues/17225#issuecomment-2260948263

   The names used for each pass are purely for human-readability, and aren't 
required to have any specific format.
   
   By convention, passes that impacted Relay functions had no prefix, while 
passes that impacted TIR functions had a `"tir."` prefix.  Most of the Relax 
passes inherited the naming convention from Relay.  Unfortunately, this means 
that Relay and Relax transforms may have the same name (e.g. the name 
`"DeadCodeElimination"` is used both 
[here](https://github.com/apache/tvm/blob/main/src/relax/transform/dead_code_elimination.cc#L187)
 for Relax and 
[here](https://github.com/apache/tvm/blob/main/src/relay/transforms/dead_code.cc#L576)
 for Relay).
   
   Since user scripts may specify per-pass config options by name (e.g. `with 
PassContext(disabled_pass=["some_pass_name"])`), updating the names to all have 
unique prefixes would break backwards compatibility.  It may be worth it at 
some point to avoid duplication, and to have an explicit IR type as part of 
every transform, but hasn't been a priority.
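   For context, this is the kind of user script the registered names matter for
(a minimal sketch; the pass name is just the example discussed above):
   
   ```python
   import tvm
   
   # Disabling a pass by its registered name.  Because a Relay and a Relax
   # transform can register under the same name, the setting applies to
   # whichever of the two happens to run under this context.
   with tvm.transform.PassContext(opt_level=3, disabled_pass=["DeadCodeElimination"]):
       pass  # build/compile calls would go here
   ```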


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [Relax] Allow `out_sinfo` to be omitted from `R.call_tir` [tvm]

2024-07-31 Thread via GitHub


Lunderberg commented on PR #17216:
URL: https://github.com/apache/tvm/pull/17216#issuecomment-2260928013

   > I think encouraging pass writers to explicitly think about the DPS pattern 
and always provide the return argument helps to reduce uncertainty here.
   
   While I think this would be an interesting point to discuss, I don't think 
it's relevant to this specific change.  This PR keeps the exact same 
`out_sinfo` in the C++ IR types, and still requires pass writers to explicitly 
provide the output info.  The `MakeCallTIR` function is not exposed to the 
back-end C++ API, only through the front-end Python API.
   
   This change is solely in the front-end, for cases where an `IRModule` is 
being hand-written.  I'd like to make that use-case less error-prone.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[I] [Bug] BindSymbolicVars has a different name with other pass in the CreateModulePass [tvm]

2024-07-31 Thread via GitHub


Cookiee235 opened a new issue, #17225:
URL: https://github.com/apache/tvm/issues/17225

   Hi all, I accidentally discovered that only the pass `BindSymbolicVars` has 
the prefix "relax.XX" when calling the function `CreateModulePass`, while none 
of the others do. Is this a feature or a bug? 
   
   
   
![image](https://github.com/user-attachments/assets/8878c75a-e4c3-4565-8b5c-004e609d72de)
   
   
   cc @Lunderberg


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [Relax] Allow `out_sinfo` to be omitted from `R.call_tir` [tvm]

2024-07-31 Thread via GitHub


tqchen commented on PR #17216:
URL: https://github.com/apache/tvm/pull/17216#issuecomment-2260887460

   I think this is mainly a design consideration about how we view the intended 
use of CreateCallTIR, in terms of the different expectations we place on 
callers of the function. I can see some merit both in auto deduction and in 
calling for explicitness.
   
   Given `call_tir` is lower level, having "less automation" here during passes 
and checking explicitly would ensure correctness, while indeed asking pass 
writers to do a bit more. It is like explicitly annotating types when writing 
C++ code versus writing `auto`. I think encouraging pass writers to explicitly 
think about the DPS pattern and always provide the return argument helps to 
reduce uncertainty here. While I can indeed see some merits of automated 
deduction, given it is not always possible, I still prefer that we keep the 
explicitness and provide a good amount of consistency checking.
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Bug] [Relax] Variable was used before its definition [tvm]

2024-07-31 Thread via GitHub


Cookiee235 commented on issue #17222:
URL: https://github.com/apache/tvm/issues/17222#issuecomment-2260857713

   @Lunderberg Thanks! The test can run well with your patch.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] fatal error: string_view: No such file or directory [tvm]

2024-07-31 Thread via GitHub


Lunderberg commented on issue #17209:
URL: https://github.com/apache/tvm/issues/17209#issuecomment-2260839459

   Should be doable with `update-alternatives` ([SO 
link](https://askubuntu.com/a/26518)).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] fatal error: string_view: No such file or directory [tvm]

2024-07-31 Thread via GitHub


yizhihenpidehou commented on issue #17209:
URL: https://github.com/apache/tvm/issues/17209#issuecomment-2260826202

   > Everything looks reasonable for the g++ configuration.
   > 
   > It looks like cmake is calling the `/usr/bin/c++` executable, which is 
usually a symlink to a specific compiler version. Can you run `realpath 
/usr/bin/c++` to see which compiler it ends up running?
   
   Thanks !!!
   I see /usr/bin/g++-5. Could you tell me how to relink c++ to g++-10.2.0?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Relax][Bug] Cannot find PackedFunc tir_zeros [tvm]

2024-07-31 Thread via GitHub


Lunderberg commented on issue #17176:
URL: https://github.com/apache/tvm/issues/17176#issuecomment-2260817965

   @jpf888 This sounds like a similar bug, but would depend on how the dispatch 
is implemented.  When the pattern-matching is applied, is the call to the fused 
kernel generated as `module.fused_function(args)`, or as 
`R.call_extern("fused_function", args)`?  The fix applied in #17202 would only 
apply to calls that are known to be within the same IRModule (the 
`module.fused_function(args)` version), and not to calls that may be defined 
outside of the IRModule (the `R.call_extern` version).


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [CI] Reduce logging level when checking if docker image exists [tvm]

2024-07-31 Thread via GitHub


Lunderberg merged PR #17221:
URL: https://github.com/apache/tvm/pull/17221


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] fatal error: string_view: No such file or directory [tvm]

2024-07-31 Thread via GitHub


Lunderberg commented on issue #17209:
URL: https://github.com/apache/tvm/issues/17209#issuecomment-2260809090

   Everything looks reasonable for the g++ configuration.
   
   It looks like cmake is calling the `/usr/bin/c++` executable, which is 
usually a symlink to a specific compiler version.  Can you run `realpath 
/usr/bin/c++` to see which compiler it ends up running?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Bug] [Relax] Argument type mismatch: expected R.Tensor, given R.Tuple [tvm]

2024-07-31 Thread via GitHub


Lunderberg commented on issue #17223:
URL: https://github.com/apache/tvm/issues/17223#issuecomment-2260795271

   I can run the test case and reproduce the error, but the error message seems 
correct for the test case.  The first argument to `Module.multiply_by_two` is a 
tensor, but the first item of `R.call_tir`'s argument tuple is a tuple.  This 
could be caught earlier by the well-formed checker, when updated to validate 
the `R.call_tir` arguments.
   
   (As a side-note, replacing `(args,)` with `args` would have the correct 
struct info, but wouldn't be an in-line relax Tuple as required by 
`R.call_tir`.  See the discussion in https://github.com/apache/tvm/pull/15916 
for more detail on the requirement for an in-line tuple.)
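   For reference, a self-contained sketch of the in-line form that `R.call_tir`
expects (not the issue's exact module; the PrimFunc body and the shapes here are
illustrative):
   
   ```python
   from tvm.script import ir as I
   from tvm.script import relax as R
   from tvm.script import tir as T
   
   @I.ir_module
   class Example:
       @T.prim_func(private=True)
       def multiply_by_two(
           A: T.Buffer((16,), "float32"),
           B: T.Buffer((16,), "float32"),
       ):
           for i in range(16):
               with T.block("compute"):
                   vi = T.axis.remap("S", [i])
                   B[vi] = A[vi] * T.float32(2)
   
       @R.function
       def main(A: R.Tensor((16,), dtype="float32")) -> R.Tensor((16,), dtype="float32"):
           cls = Example
           # The argument tuple is written in-line, with the tensor itself as its
           # element, rather than binding a tuple and wrapping it in another tuple.
           gv = R.call_tir(
               cls.multiply_by_two,
               (A,),
               out_sinfo=R.Tensor((16,), dtype="float32"),
           )
           return gv
   ```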


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Bug] [Relax] Variable was used before its definition [tvm]

2024-07-31 Thread via GitHub


Lunderberg commented on issue #17222:
URL: https://github.com/apache/tvm/issues/17222#issuecomment-2260766884

   Looks like this is a bug in the dependency collection for 
recursively-defined functions.  Should be resolved with 
https://github.com/apache/tvm/pull/17224.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [Relax][Analysis] Handle recursive functions in CollectVarUsage [tvm]

2024-07-31 Thread via GitHub


Lunderberg opened a new pull request, #17224:
URL: https://github.com/apache/tvm/pull/17224

   Prior to this commit, the `relax::analysis::CollectVarUsage` utility treated 
a local function definition as in-scope after visiting the body of the local 
function.  As a result, recursive calls from a local function were incorrectly 
identified as calls to an undefined variable.
   
   This commit updates the `CollectVarUsage` to treat a local function 
definition as in-scope when inspecting the function body.  This change is 
similar to the change made for structural equality in 
https://github.com/apache/tvm/pull/16756.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Relax][Bug] Cannot find PackedFunc tir_zeros [tvm]

2024-07-31 Thread via GitHub


jpf888 commented on issue #17176:
URL: https://github.com/apache/tvm/issues/17176#issuecomment-2260732685

   @Lunderberg @Cookiee235
   hi, When I try to use cudnn dispatch, I encounter the same problem as you 
during runtime. After replacing with the code changes you submitted, I still 
have the same problem. Could you please give me some suggestions?
   
   1、Actual behavior:
   `vm::runtime::Optional const&, 
bool&>(tvm::runtime::String&, tvm::runtime::String&, 
picojson::object_with_ordered_keys const&, DLDevice&, 
tvm::runtime::Optional const&, bool&) at 
mlc-llm/3rdparty/tvm/include/tvm/runtime/memory.h:196 5: 
tvm::runtime::ObjectPtr 
tvm::runtime::ObjAllocatorBase::make_object const&, 
bool&>(tvm::runtime::String&, tvm::runtime::String&, 
picojson::object_with_ordered_keys const&, DLDevice&, 
tvm::runtime::Optional const&, bool&) at 
mlc-llm/3rdparty/tvm/include/tvm/runtime/memory.h:72 4: 
mlc::llm::serve::ModelImpl* 
tvm::runtime::SimpleObjAllocator::Handler::New 
const&, bool&>(tvm::runtime::SimpleObjAllocator*, tvm::runtime::String&, 
tvm::runtime::String&, picojson::object_with_ordered_keys const&, DLDevice&, 
tvm::runtime::Optional const&, bool&) at 
mlc-llm/3rdparty/tvm/include/tvm/runtime/memory.h:122 3: 
mlc::llm::serve::ModelImpl::ModelImpl(tvm::runtime::String, 
tvm::runtime::String, picojson::object_with_ordered_keys, DLDevice, 
tvm::runtime::Optional const&, bool) at 
/mlc-llm/cpp/serve/model.cc:66 2: 
mlc::llm::serve::FunctionTable::Init(tvm::runtime::String, DLDevice, 
picojson::object_with_ordered_keys, 
tvm::runtime::Optional) at 
/mlc-llm/cpp/serve/function_table.cc:133 1: 
tvm::runtime::relax_vm::VirtualMachineImpl::_Init(tvm::runtime::TVMArgs, 
tvm::runtime::TVMRetValue*) 0: 
tvm::runtime::relax_vm::VirtualMachineImpl::InitFuncPool() File 
"/mlc-llm/3rdparty/tvm/src/runtime/relax_vm/vm.cc", line 70
 7 InternalError:`
**Check failed: (func.defined()) is false: Error: Cannot find PackedFunc 
fused_relax_nn_conv2d_cudnn in either Relax VM kernel library, or in TVM 
runtime PackedFunc registry, or in global Relax functions of the VM executable**
   
   2、But when I run the test case of TVM's cudnn conv2d separately, it works 
normally, and I can find 'fused_relax_nn_conv2d_cudnn'.   
   
   3、I printed out the mod before and after the cudnn conv2d pattern 
processing, and the names are all the same, such as fused_relax_nn_conv2d_cudnn.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] fatal error: string_view: No such file or directory [tvm]

2024-07-31 Thread via GitHub


yizhihenpidehou commented on issue #17209:
URL: https://github.com/apache/tvm/issues/17209#issuecomment-2260730491

   1.
   root@ubuntu:/opt/SparseMatrixAnalysis/tvm# make VERBOSE=1
   Re-run cmake no build system arguments
   -- Forbidding undefined symbols in shared library, using -Wl,--no-undefined 
on platform Linux
   -- Build with RPC support...
   -- Build with Graph Executor support...
   -- Build with profiler...
   -- Build with AOT Executor support...
   -- Could NOT find GTest (missing: GTEST_LIBRARY GTEST_INCLUDE_DIR 
GTEST_MAIN_LIBRARY) 
   -- Build Alloc alignment set to 64
   -- Didn't find the path to CCACHE, disabling ccache
   -- VTA build with VTA_HW_PATH=/opt/SparseMatrixAnalysis/tvm/3rdparty/vta-hw
   -- Build VTA runtime with target: sim
   -- Build with contrib.random
   -- Build with contrib.sort
   -- Build with contrib.hybriddump
   -- Git found: /usr/bin/git
   -- Found TVM_GIT_COMMIT_HASH=4330c110550242571da017a1b15ae0b765723ae8
   -- Found TVM_GIT_COMMIT_TIME=2024-07-28 10:02:22 +0530
   -- Could NOT find LIBBACKTRACE (missing: LIBBACKTRACE_STATIC_LIBRARY 
LIBBACKTRACE_INCLUDE_DIR) 
   -- Building libbacktrace from 3rdparty/libbacktrace
   -- Building with TVM Map...
   -- Build with thread support...
   -- Added "-fuse-ld=lld" to linker flags 
   -- Build without FlashInfer
   -- Configuring done
   -- Generating done
   -- Build files have been written to: /opt/SparseMatrixAnalysis/tvm/build
   make[1]: Entering directory '/opt/SparseMatrixAnalysis/tvm/build'
   /opt/cmake-3.18.0/bin/cmake -P 
/opt/SparseMatrixAnalysis/tvm/build/CMakeFiles/VerifyGlobs.cmake
   /opt/cmake-3.18.0/bin/cmake -S/opt/SparseMatrixAnalysis/tvm 
-B/opt/SparseMatrixAnalysis/tvm/build --check-build-system 
CMakeFiles/Makefile.cmake 0
   /opt/cmake-3.18.0/bin/cmake -E cmake_progress_start 
/opt/SparseMatrixAnalysis/tvm/build/CMakeFiles 
/opt/SparseMatrixAnalysis/tvm/build//CMakeFiles/progress.marks
   make  -f CMakeFiles/Makefile2 all
   make[2]: Entering directory '/opt/SparseMatrixAnalysis/tvm/build'
   make  -f CMakeFiles/project_libbacktrace.dir/build.make 
CMakeFiles/project_libbacktrace.dir/depend
   make[3]: Entering directory '/opt/SparseMatrixAnalysis/tvm/build'
   cd /opt/SparseMatrixAnalysis/tvm/build && /opt/cmake-3.18.0/bin/cmake -E 
cmake_depends "Unix Makefiles" /opt/SparseMatrixAnalysis/tvm 
/opt/SparseMatrixAnalysis/tvm /opt/SparseMatrixAnalysis/tvm/build 
/opt/SparseMatrixAnalysis/tvm/build 
/opt/SparseMatrixAnalysis/tvm/build/CMakeFiles/project_libbacktrace.dir/DependInfo.cmake
 --color=
   make[3]: Leaving directory '/opt/SparseMatrixAnalysis/tvm/build'
   make  -f CMakeFiles/project_libbacktrace.dir/build.make 
CMakeFiles/project_libbacktrace.dir/build
   make[3]: Entering directory '/opt/SparseMatrixAnalysis/tvm/build'
   make[3]: Nothing to be done for 'CMakeFiles/project_libbacktrace.dir/build'.
   make[3]: Leaving directory '/opt/SparseMatrixAnalysis/tvm/build'
   [  1%] Built target project_libbacktrace
   make  -f CMakeFiles/tvm_runtime_objs.dir/build.make 
CMakeFiles/tvm_runtime_objs.dir/depend
   make[3]: Entering directory '/opt/SparseMatrixAnalysis/tvm/build'
   cd /opt/SparseMatrixAnalysis/tvm/build && /opt/cmake-3.18.0/bin/cmake -E 
cmake_depends "Unix Makefiles" /opt/SparseMatrixAnalysis/tvm 
/opt/SparseMatrixAnalysis/tvm /opt/SparseMatrixAnalysis/tvm/build 
/opt/SparseMatrixAnalysis/tvm/build 
/opt/SparseMatrixAnalysis/tvm/build/CMakeFiles/tvm_runtime_objs.dir/DependInfo.cmake
 --color=
   make[3]: Leaving directory '/opt/SparseMatrixAnalysis/tvm/build'
   make  -f CMakeFiles/tvm_runtime_objs.dir/build.make 
CMakeFiles/tvm_runtime_objs.dir/build
   make[3]: Entering directory '/opt/SparseMatrixAnalysis/tvm/build'
   [  1%] Building CXX object 
CMakeFiles/tvm_runtime_objs.dir/src/runtime/c_runtime_api.cc.o
   /usr/bin/c++ -DDMLC_USE_FOPEN64=0 
-DDMLC_USE_LOGGING_LIBRARY="" -DNDEBUG -DNDEBUG=1 
-DTVM_INDEX_DEFAULT_I64=1 -DTVM_KALLOC_ALIGNMENT=64 
-DTVM_THREADPOOL_USE_OPENMP=0 -DTVM_USE_LIBBACKTRACE=1 -DUSE_FALLBACK_STL_MAP=0 
-I/opt/SparseMatrixAnalysis/tvm/include 
-I/opt/SparseMatrixAnalysis/tvm/build/libbacktrace/include 
-I/opt/SparseMatrixAnalysis/tvm/3rdparty/libcrc/include -isystem 
/opt/SparseMatrixAnalysis/tvm/3rdparty/dlpack/include -isystem 
/opt/SparseMatrixAnalysis/tvm/3rdparty/dmlc-core/include -isystem 
/opt/SparseMatrixAnalysis/tvm/3rdparty/rang/include -isystem 
/opt/SparseMatrixAnalysis/tvm/3rdparty/compiler-rt -isystem 
/opt/SparseMatrixAnalysis/tvm/3rdparty/picojson -std=c++17 -O2 -Wall -fPIC  -o 
CMakeFiles/tvm_runtime_objs.dir/src/runtime/c_runtime_api.cc.o -c 
/opt/SparseMatrixAnalysis/tvm/src/runtime/c_runtime_api.cc
   In file included from 
/opt/SparseMatrixAnalysis/tvm/include/tvm/runtime/ndarray.h:30:0,
from 
/opt/SparseMatrixAnalysis/tvm/include/tvm/runtime/device_api.h:28,
from 
/opt/SparseMatrixAnalysis/tvm/src/runtime/c_runtime_api.cc:27:
   

Re: [I] [Relax][Bug] Cannot find PackedFunc tir_zeros [tvm]

2024-07-31 Thread via GitHub


jpf888 commented on issue #17176:
URL: https://github.com/apache/tvm/issues/17176#issuecomment-2260713905

   @Lunderberg @Cookiee235
   hi, When I try to use **cudnn dispatch**, I encounter the same problem as 
you during runtime.  After replacing with the code changes you submitted, I 
still have the same problem. Could you please give me some suggestions?
   
   Actual behavior:
   `vm::runtime::Optional const&, 
bool&>(tvm::runtime::String&, tvm::runtime::String&, 
picojson::object_with_ordered_keys const&, DLDevice&, 
tvm::runtime::Optional const&, bool&)
   at mlc-llm/3rdparty/tvm/include/tvm/runtime/memory.h:196
 5: tvm::runtime::ObjectPtr 
tvm::runtime::ObjAllocatorBase::make_object const&, 
bool&>(tvm::runtime::String&, tvm::runtime::String&, 
picojson::object_with_ordered_keys const&, DLDevice&, 
tvm::runtime::Optional const&, bool&)
   at mlc-llm/3rdparty/tvm/include/tvm/runtime/memory.h:72
 4: mlc::llm::serve::ModelImpl* 
tvm::runtime::SimpleObjAllocator::Handler::New const&, 
bool&>(tvm::runtime::SimpleObjAllocator*, tvm::runtime::String&, 
tvm::runtime::String&, picojson::object_with_ordered_keys const&, DLDevice&, 
tvm::runtime::Optional const&, bool&)
   at mlc-llm/3rdparty/tvm/include/tvm/runtime/memory.h:122
 3: mlc::llm::serve::ModelImpl::ModelImpl(tvm::runtime::String, 
tvm::runtime::String, picojson::object_with_ordered_keys, DLDevice, 
tvm::runtime::Optional const&, bool)
   at /mlc-llm/cpp/serve/model.cc:66
 2: mlc::llm::serve::FunctionTable::Init(tvm::runtime::String, DLDevice, 
picojson::object_with_ordered_keys, 
tvm::runtime::Optional)
   at /mlc-llm/cpp/serve/function_table.cc:133
 1: 
tvm::runtime::relax_vm::VirtualMachineImpl::_Init(tvm::runtime::TVMArgs, 
tvm::runtime::TVMRetValue*)
 0: tvm::runtime::relax_vm::VirtualMachineImpl::InitFuncPool()
 File "/mlc-llm/3rdparty/tvm/src/runtime/relax_vm/vm.cc", line 707
   InternalError: **Check failed: (func.defined()) is false: Error: Cannot find 
PackedFunc fused_relax_nn_conv2d_cudnn in either Relax VM kernel library, or in 
TVM runtime PackedFunc registry, or in global Relax functions of the VM 
executable**`


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Relax][Bug] Cannot find PackedFunc tir_zeros [tvm]

2024-07-31 Thread via GitHub


jpf888 commented on issue #17176:
URL: https://github.com/apache/tvm/issues/17176#issuecomment-2260700751

   @Lunderberg @Cookiee235
   hi, I encountered a similar issue. After replacing with the code changes you 
submitted, I still have the same problem. Could you please give me some 
suggestions?
   
   Actual behavior:
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Relax][Bug] Cannot find PackedFunc tir_zeros [tvm]

2024-07-31 Thread via GitHub


jpf888 commented on issue #17176:
URL: https://github.com/apache/tvm/issues/17176#issuecomment-2260699255

   @Lunderberg @Cookiee235   
   hi, I encountered a similar issue. After replacing with the code changes you 
submitted, I still have the same problem. Could you please give me some 
suggestions?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] fatal error: string_view: No such file or directory [tvm]

2024-07-31 Thread via GitHub


Lunderberg commented on issue #17209:
URL: https://github.com/apache/tvm/issues/17209#issuecomment-2260691907

   Can you show the output from the following commands?
   
   * `make VERBOSE=1`: Show the full command used to compile the `analyzer.cc` 
file
   * `which g++`: Where the `g++` executable is located.
   * `echo | g++ -xc++ -E -v -`: Print out the defaults used by g++.  You'll 
want to look at the lines after `#include <...> search starts here` to see if 
it is looking for the new stdlib implementation.
   * `ldconfig --print-cache | grep libstdc`: Whether the linker is configured 
to find the new stdlib implementation.
   * `echo LD_LIBRARY_PATH=$LD_LIBRARY_PATH`: Whether the `LD_LIBRARY_PATH` is 
used to override the settings from `ldconfig`.
   
   
   It's possible that the newer version of g++ is still using the stdlib 
headers provided by the system's default version of g++, and these should help 
in determining if that's the case.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [Relax] Allow `out_sinfo` to be omitted from `R.call_tir` [tvm]

2024-07-31 Thread via GitHub


Lunderberg commented on code in PR #17216:
URL: https://github.com/apache/tvm/pull/17216#discussion_r1698614921


##
src/relax/op/op.cc:
##
@@ -331,8 +331,133 @@ RELAY_REGISTER_OP("relax.call_tir")
 .set_attr("FNormalize", NormalizeCallTIR)
 .set_attr("FPurity", Bool(true));
 
-Expr MakeCallTIR(Expr func, Tuple args, Array out_sinfo_list,
+static Array InferCallTIROutputStructInfo(Expr func, Tuple 
args,

Review Comment:
   Absolutely agreed that we should check for consistency after generating the 
IR, and that's something I want to add to the well-formed checker as well.  
This specific PR would be to avoid inconsistency while generating the IR.
   
   (And if we can't infer the output shape, then the output shape must still be 
be explicitly provided.)



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [Relax] Allow `out_sinfo` to be omitted from `R.call_tir` [tvm]

2024-07-31 Thread via GitHub


Lunderberg commented on PR #17216:
URL: https://github.com/apache/tvm/pull/17216#issuecomment-2260649071

   > For example, the above code is a valid tir call, but needs the output 
sinfo to be explicitly specified. Because we have such cases, and `call_tir` is 
a lower level function, it is safer to always ask for sinfo, but checks its 
consistency with the corresponding prim_func signature if needed
   
   That's a good point, and I agree that we should always be able to explicitly 
specify the output struct info, as output tensor shapes in TIR may define 
symbolic shapes.  However, I don't think it should be a required argument.
   
   I've added a new test case, based on your example with `reshape`, to 
validate the behavior when the output shape cannot be inferred.  While the 
initial implementation did identify this failure and throw an error, the error 
message wasn't ideal.  I've added an earlier check for non-inferable output 
shapes, so that the error message can direct the user to provide the 
`out_sinfo` field.
   
   Do the updated check and error messages address your concerns for this PR?
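   For reference, the kind of case where `out_sinfo` stays mandatory looks
roughly like this (a sketch, not the test case added in this PR):
   
   ```python
   from tvm.script import ir as I
   from tvm.script import relax as R
   from tvm.script import tir as T
   
   @I.ir_module
   class Reshape:
       @T.prim_func(private=True)
       def reshape(a: T.handle, b: T.handle):
           A = T.match_buffer(a, (T.int64(16),), "float32")
           N = T.int64()
           M = T.int64()
           # The output shape (N, M) is symbolic and is not determined by the input.
           B = T.match_buffer(b, (N, M), "float32")
           for i, j in T.grid(N, M):
               with T.block("copy"):
                   vi, vj = T.axis.remap("SS", [i, j])
                   B[vi, vj] = A[vi * M + vj]
   
       @R.function
       def main(x: R.Tensor((16,), dtype="float32")) -> R.Tensor((4, 4), dtype="float32"):
           cls = Reshape
           # (4, 4) cannot be deduced from the 16-element input alone, so the
           # output struct info must be spelled out explicitly.
           y = R.call_tir(
               cls.reshape,
               (x,),
               out_sinfo=R.Tensor((4, 4), dtype="float32"),
           )
           return y
   ```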


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] fatal error: string_view: No such file or directory [tvm]

2024-07-31 Thread via GitHub


yizhihenpidehou commented on issue #17209:
URL: https://github.com/apache/tvm/issues/17209#issuecomment-2260594918

   > When you upgraded the GCC version, are you now compiling with the stdlib 
implementation provided by the new GCC version? This may require re-running 
cmake, if the old stdlib implementation is saved in `CMakeCache.txt`.
   
   I do not find any 'stdlib' entry in CMakeCache.txt; my gcc version is shown 
below, and my operating system is Ubuntu 16.04.
   https://github.com/user-attachments/assets/8af26668-3ca0-4048-a59e-10dc412f8e18
   https://github.com/user-attachments/assets/8422f3be-e739-489f-a779-8c9ae2dadda6
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [TIR] Validate tir::Buffer axis_separators on construction [tvm]

2024-07-31 Thread via GitHub


Lunderberg commented on code in PR #17219:
URL: https://github.com/apache/tvm/pull/17219#discussion_r1698533352


##
src/tir/ir/buffer.cc:
##
@@ -334,24 +334,38 @@ inline Array BufferOffset(const BufferNode* n, 
Array index,
   return offsets;
 }
 
-Buffer Buffer::GetFlattenedBuffer() const {
-  auto self = operator->();
-
+static void ValidateAxisSeparators(const Array& axis_separators, 
size_t buffer_dim) {
   // These checks ensure that all output axes contain at least one
   // input axis.
-  for (size_t i = 0; (i + 1) < self->axis_separators.size(); i++) {
-auto sep = self->axis_separators[i]->value;
-auto next_sep = self->axis_separators[i + 1]->value;
-ICHECK_LT(sep, next_sep) << "Axis separators must be in strictly 
increasing order.";
-  }
-  if (self->axis_separators.size()) {
-auto first_sep = self->axis_separators[0]->value;
-ICHECK_GT(first_sep, 0) << "First axis separator must be strictly greater 
than 0, "
-<< "so that first output axis contains at least 
one input axis";
-auto last_sep = self->axis_separators[self->axis_separators.size() - 
1]->value;
-ICHECK_LT(last_sep, self->shape.size())
-<< "Last output axis must contain at least one input axis.";
+  for (size_t i = 0; (i + 1) < axis_separators.size(); i++) {
+auto sep = axis_separators[i]->value;
+auto next_sep = axis_separators[i + 1]->value;
+CHECK_LT(sep, next_sep) << "ValueError: "
+<< "Axis separators must be in strictly increasing 
order, "
+<< "but axis_separators[" << i << "] = " << sep
+<< " is greater than or equal to axis_separators[" 
<< (i + 1)
+<< "] = " << next_sep << ".";
+  }
+  if (axis_separators.size()) {
+auto first_sep = axis_separators[0]->value;
+CHECK_GT(first_sep, 0) << "ValueError: "
+   << "First axis separator must be strictly greater 
than 0, "
+   << "so that first output axis contains at least one 
input axis.  "
+   << "However, the axis_separators[0] = " << 
first_sep;
+auto last_sep = axis_separators[axis_separators.size() - 1]->value;
+CHECK_LT(last_sep, buffer_dim)
+<< "ValueError: "
+<< "Last output axis must contain at least one input axis.  "
+<< "However, the axis_separators[" << (axis_separators.size() - 1) << 
"] = " << last_sep
+<< " does not leave any input axes between it and the buffer's 
dimensionality "
+<< buffer_dim;

Review Comment:
   I've updated the PR to loosen the condition on `axis_separators`, so they 
are only required to be increasing rather than strictly increasing.  This 
allows the buffer view to have the correct number of `axis_separators` for its 
physical dimension, even for a scalar buffer with `shape=[]`.
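   For anyone following along, a small sketch of what `axis_separators` control
(the shapes are illustrative, not taken from the PR's tests):
   
   ```python
   from tvm import tir
   
   # axis_separators=[1] splits the 2-d logical shape into two physical axes
   # (e.g. Hexagon-style 2-d memory) instead of flattening it to (128*128,).
   # With the loosened check, the entries only need to be non-decreasing
   # rather than strictly increasing.
   buf = tir.decl_buffer((128, 128), "float32", name="A", axis_separators=[1])
   print(buf.axis_separators)  # [1]
   ```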



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [Transform][Relax] Handle `is_group` argument in IPC AllReduce [tvm]

2024-07-31 Thread via GitHub


tqchen merged PR #17201:
URL: https://github.com/apache/tvm/pull/17201


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[I] [Bug] [Relax] Argument type mismatch: expected R.Tensor, given R.Tuple [tvm]

2024-07-31 Thread via GitHub


Cookiee235 opened a new issue, #17223:
URL: https://github.com/apache/tvm/issues/17223

   It seems the provided Relax IR is valid; however, it crashed unexpectedly 
when compiled using `relax.build()`.
   
   ### Actual behavior
   
   ```
   Traceback (most recent call last):
 File "test_simp.py", line 26, in 
   ex = relax.build(mod, target='llvm')  # crash here!
^^^
 File "/software/tvm-lunder/python/tvm/relax/vm_build.py", line 335, in 
build
   mod = pipeline(mod)
 ^
 File "/software/tvm-lunder/python/tvm/ir/transform.py", line 238, in 
__call__
   return _ffi_transform_api.RunPass(self, mod)
  ^
 File "/software/tvm-lunder/python/tvm/_ffi/_ctypes/packed_func.py", line 
240, in __call__
   raise_last_ffi_error()
 File "/software/tvm-lunder/python/tvm/_ffi/base.py", line 481, in 
raise_last_ffi_error
   raise py_err
 File "/software/tvm-lunder/python/tvm/relax/pipeline.py", line 101, in 
_pipeline
   mod = seq(mod)
 
 File "/software/tvm-lunder/python/tvm/ir/transform.py", line 238, in 
__call__
   return _ffi_transform_api.RunPass(self, mod)
  ^
 File "/software/tvm-lunder/python/tvm/_ffi/_ctypes/packed_func.py", line 
240, in __call__
   raise_last_ffi_error()
 File "/software/tvm-lunder/python/tvm/_ffi/base.py", line 481, in 
raise_last_ffi_error
   raise py_err
   tvm._ffi.base.TVMError: Traceback (most recent call last):
 33: 
tvm::runtime::PackedFuncObj::Extractor::AssignTypedLambda(tvm::transform::{lambda(tvm::transform::Pass, 
tvm::IRModule)#7}, std::__cxx11::basic_string, 
std::allocator >)::{lambda(tvm::runtime::TVMArgs const&, 
tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, 
std::__cxx11::basic_string, std::allocator 
>, tvm::runtime::TVMRetValue)
 32: tvm::transform::Pass::operator()(tvm::IRModule) const
 31: tvm::transform::Pass::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
 30: tvm::transform::SequentialNode::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
 29: tvm::transform::Pass::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
 28: tvm::transform::ModulePassNode::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
 27: _ZN3tvm7runtime13PackedFuncObj
 26: tvm::runtime::TypedPackedFunc::AssignTypedLambda(tvm::relax::transform::CallTIRRewrite()::{lambda(tvm::IRModule,
 tvm::transform::PassContext)#1})::{lambda(tvm::runtime::TVMArgs const&, 
tvm::runtime::TVMRetValue*)#1}::operator()(tvm::runtime::TVMArgs const, 
tvm::runtime::TVMRetValue) const
 25: tvm::relax::CallTIRMutator::Run()
 24: tvm::relax::ExprMutator::VisitExpr(tvm::RelayExpr const&)
 23: 
_ZZN3tvm5relax11ExprFunctorIFNS_9RelayExprERKS2_EE10InitVTableEvENUlRKNS_7r
 22: tvm::relax::ExprMutator::VisitExpr_(tvm::relax::FunctionNode const*)
 21: tvm::relax::ExprMutator::VisitWithNewScope(tvm::RelayExpr const&, 
tvm::runtime::Optional >)
 20: tvm::relax::ExprMutator::VisitExpr(tvm::RelayExpr const&)
 19: 
_ZZN3tvm5relax11ExprFunctorIFNS_9RelayExprERKS2_EE10InitVTableEvENUlRKNS_7r
 18: tvm::relax::ExprMutator::VisitExpr_(tvm::relax::SeqExprNode const*)
 17: tvm::relax::ExprMutator::VisitBindingBlock(tvm::relax::BindingBlock 
const&)
 16: 
tvm::relax::ExprMutator::VisitBindingBlock_(tvm::relax::BindingBlockNode const*)
 15: tvm::relax::ExprMutator::VisitBinding(tvm::relax::Binding const&)
 14: tvm::relax::ExprMutator::VisitBinding_(tvm::relax::VarBindingNode 
const*)
 13: tvm::relax::ExprMutator::VisitBinding_(tvm::relax::VarBindingNode 
const*, tvm::relax::ConstantNode const*)
 12: tvm::relax::ExprMutator::VisitExpr(tvm::RelayExpr const&)
 11: 
_ZZN3tvm5relax11ExprFunctorIFNS_9RelayExprERKS2_EE10InitVTableEvENUlRKNS_7r
 10: tvm::relax::CallTIRMutator::VisitExpr_(tvm::relax::CallNode const*)
 9: tvm::relax::BlockBuilderImpl::Emit(tvm::RelayExpr, tvm::runtime::String)
 8: tvm::relax::BlockBuilderImpl::Emit(tvm::RelayExpr, bool, 
tvm::runtime::String)
 7: tvm::relax::Normalizer::Normalize(tvm::RelayExpr const&)
 6: tvm::relax::Normalizer::VisitExpr(tvm::RelayExpr const&)
 5: 
_ZZN3tvm5relax11ExprFunctorIFNS_9RelayExprERKS2_EE10InitVTableEvENUlRKNS_7r
 4: tvm::relax::Normalizer::VisitExpr_(tvm::relax::CallNode const*)
 3: tvm::relax::Normalizer::InferStructInfo(tvm::relax::Call const&)
 2: tvm::relax::DeriveCallRetStructInfo(tvm::relax::FuncStructInfo const&, 
tvm::relax::Call const&, tvm::relax::BlockBuilder const&, tvm::arith::Analyzer*)
 1: tvm::relax::CallRetStructInfoDeriver::Derive(tvm::relax::FuncStructInfo 
const&, tvm::relax::Call const&, tvm::relax::BlockBuilder const&)
 0: 

Re: [PR] [TIR] Validate tir::Buffer axis_separators on construction [tvm]

2024-07-31 Thread via GitHub


Lunderberg commented on code in PR #17219:
URL: https://github.com/apache/tvm/pull/17219#discussion_r1698515074


##
src/tir/ir/buffer.cc:
##
@@ -334,24 +334,38 @@ inline Array BufferOffset(const BufferNode* n, 
Array index,
   return offsets;
 }
 
-Buffer Buffer::GetFlattenedBuffer() const {
-  auto self = operator->();
-
+static void ValidateAxisSeparators(const Array& axis_separators, 
size_t buffer_dim) {
   // These checks ensure that all output axes contain at least one
   // input axis.
-  for (size_t i = 0; (i + 1) < self->axis_separators.size(); i++) {
-auto sep = self->axis_separators[i]->value;
-auto next_sep = self->axis_separators[i + 1]->value;
-ICHECK_LT(sep, next_sep) << "Axis separators must be in strictly 
increasing order.";
-  }
-  if (self->axis_separators.size()) {
-auto first_sep = self->axis_separators[0]->value;
-ICHECK_GT(first_sep, 0) << "First axis separator must be strictly greater 
than 0, "
-<< "so that first output axis contains at least 
one input axis";
-auto last_sep = self->axis_separators[self->axis_separators.size() - 
1]->value;
-ICHECK_LT(last_sep, self->shape.size())
-<< "Last output axis must contain at least one input axis.";
+  for (size_t i = 0; (i + 1) < axis_separators.size(); i++) {
+auto sep = axis_separators[i]->value;
+auto next_sep = axis_separators[i + 1]->value;
+CHECK_LT(sep, next_sep) << "ValueError: "
+<< "Axis separators must be in strictly increasing 
order, "
+<< "but axis_separators[" << i << "] = " << sep
+<< " is greater than or equal to axis_separators[" 
<< (i + 1)
+<< "] = " << next_sep << ".";
+  }
+  if (axis_separators.size()) {
+auto first_sep = axis_separators[0]->value;
+CHECK_GT(first_sep, 0) << "ValueError: "
+   << "First axis separator must be strictly greater 
than 0, "
+   << "so that first output axis contains at least one 
input axis.  "
+   << "However, the axis_separators[0] = " << 
first_sep;
+auto last_sep = axis_separators[axis_separators.size() - 1]->value;
+CHECK_LT(last_sep, buffer_dim)
+<< "ValueError: "
+<< "Last output axis must contain at least one input axis.  "
+<< "However, the axis_separators[" << (axis_separators.size() - 1) << 
"] = " << last_sep
+<< " does not leave any input axes between it and the buffer's 
dimensionality "
+<< buffer_dim;

Review Comment:
   I'm not sure the best way forward on it, as buffer views have always had a 
bit of this problem.  They have enough information to show/implement element 
access of a buffer, but some of the fields (e.g. `strides`) aren't used when 
lowering.  But at least for `strides`, the field can be set to a value that is 
consistent with the parent buffer.  For `axis_separators`, the view may not 
have sufficient information to determine the flattened shape.
   
   That said, being able to determine the physical dimensionality of a 
`tir::Buffer`, regardless of whether it is a view or the original, is useful.  
I like your idea of relaxing the `CHECK_GT` and `CHECK_LT` conditions to 
`CHECK_GE` and `CHECK_LE`, as it would let these cases be better expressed.  
That would allow a physical axis with extent 1 to be determined from an empty 
set of logical axes.
   
   (This change all came up from [this failing unit 
test](https://github.com/apache/tvm/pull/17219/files#diff-6c21ac3bae4a648708cb2b84c1e975dd9b80b25dada4fc964ed6a2e688b2913cR99).
  The `B_subregion0` buffer couldn't satisfy the constraints 
`len(view.axis_separators) < len(view.shape)` and `len(view.axis_separators) == 
len(backing_alloc_buf.axis_separators)` at the same time.)
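   
   As a rough illustration of how `axis_separators` group logical axes into 
physical axes (a sketch using `tvm.tir.decl_buffer`, not code from this PR):
   
   ```python
   import tvm
   
   # Shape (2, 3, 4) with axis_separators=[1] groups axis 0 by itself and
   # axes 1, 2 together, so the flattened physical shape is (2, 3 * 4) = (2, 12).
   buf = tvm.tir.decl_buffer((2, 3, 4), "float32", name="A", axis_separators=[1])
   
   # Under the checks above, axis_separators=[0] or [3] would be rejected,
   # since the first/last physical axis would contain no logical axis.
   # Relaxing CHECK_GT/CHECK_LT to CHECK_GE/CHECK_LE would instead give that
   # physical axis an extent of 1.
   ```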



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[I] [Bug] [Relax] Variable while_loop was used before its definition [tvm]

2024-07-31 Thread via GitHub


Cookiee235 opened a new issue, #17222:
URL: https://github.com/apache/tvm/issues/17222

   The Relax IR in the test case below passes the well-formedness check, but 
unexpectedly fails when running DeadCodeElimination!
   
   
   ### Actual behavior
   
   ```
   Traceback (most recent call last):
 File "test_sim.py", line 54, in 
   mod = tvm.relax.transform.DeadCodeElimination()(mod)
 ^^
 File "/software/tvm-lunder/python/tvm/ir/transform.py", line 238, in 
__call__
   return _ffi_transform_api.RunPass(self, mod)
  ^
 File "/software/tvm-lunder/python/tvm/_ffi/_ctypes/packed_func.py", line 
240, in __call__
   raise_last_ffi_error()
 File "/software/tvm-lunder/python/tvm/_ffi/base.py", line 481, in 
raise_last_ffi_error
   raise py_err
   tvm._ffi.base.TVMError: Traceback (most recent call last):
 18: 
tvm::runtime::PackedFuncObj::Extractor::AssignTypedLambda(tvm::transform::{lambda(tvm::transform::Pass, 
tvm::IRModule)#7}, std::__cxx11::basic_string, 
std::allocator >)::{lambda(tvm::runtime::TVMArgs const&, 
tvm::runtime::TVMRetValue*)#1}> >::Call(tvm::runtime::PackedFuncObj const*, 
std::__cxx11::basic_string, std::allocator 
>, tvm::runtime::TVMRetValue)
 17: tvm::transform::Pass::operator()(tvm::IRModule) const
 16: tvm::transform::Pass::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
 15: tvm::transform::ModulePassNode::operator()(tvm::IRModule, 
tvm::transform::PassContext const&) const
 14: 
_ZN3tvm7runtime13PackedFuncObj9ExtractorINS0_16PackedFuncSubObjIZNS0_15TypedPackedFuncIFNS_8IRModuleES5_NS_9transform11PassContextEEE17AssignTypedLambdaIZNS_5relax9transform19DeadCodeEliminationENS0_5ArrayINS0_6StringEvEEEUlS5_S7_E_EEvT_EUlRKNS0_7TVMArgsEPNS0_11TVMRetValueEE_EEE4CallEPKS1_SI_SM_
 13: tvm::relax::DeadCodeElimination(tvm::IRModule const&, 
tvm::runtime::Array)
 12: tvm::relax::RemoveAllUnused(tvm::RelayExpr)
 11: tvm::relax::CollectVarUsage(tvm::RelayExpr const&)
 10: tvm::relax::UDChain::Collect(tvm::RelayExpr const&)
 9: tvm::relax::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
 8: tvm::relax::ExprVisitor::VisitExpr_(tvm::relax::FunctionNode const*)
 7: tvm::relax::ExprVisitor::VisitExpr(tvm::RelayExpr const&)
 6: tvm::relax::ExprVisitor::VisitExpr_(tvm::relax::SeqExprNode const*)
 5: tvm::relax::ExprVisitor::VisitBindingBlock(tvm::relax::BindingBlock 
const&)
 4: 
tvm::relax::ExprVisitor::VisitBindingBlock_(tvm::relax::BindingBlockNode const*)
 3: tvm::relax::ExprVisitor::VisitBinding(tvm::relax::Binding const&)
 2: tvm::relax::UDChain::VisitBinding_(tvm::relax::VarBindingNode const*)
 1: tvm::relax::ExprVisitor::VisitBinding_(tvm::relax::VarBindingNode 
const*)
 0: tvm::relax::UDChain::VisitVarDef(tvm::relax::Var const&)
 File "/software/tvm-lunder/src/relax/analysis/udchain.cc", line 75
   TVMError: Check failed: (!usage_map.count(var)) is false: Variable 
while_loop was used before its definition
   ```
   
   
   ### Steps to reproduce
   
   ```
   import tvm
   from tvm import relax
   from tvm.script import ir as I
   from tvm.script import tir as T
   from tvm.script import relax as R
   
   @I.ir_module
   class Module:
   @T.prim_func(private=True)
   def add(i: T.Buffer((), "int32"), c: T.Buffer((), "int32"), T_add: 
T.Buffer((), "int32")):
   T.func_attr({"tir.noalias": T.bool(True)})
   # with T.block("root"):
   with T.block("T_add"):
   vi = T.axis.spatial(1, T.int64(0))
   T.reads(i[()], c[()])
   T.writes(T_add[()])
   T_add[()] = i[()] + c[()]
   
   @T.prim_func(private=True)
   def add1(s: T.Buffer((T.int64(2), T.int64(3)), "float32"), x: 
T.Buffer((T.int64(2), T.int64(3)), "float32"), T_add: T.Buffer((T.int64(2), 
T.int64(3)), "float32")):
   T.func_attr({"tir.noalias": T.bool(True)})
   # with T.block("root"):
   for ax0, ax1 in T.grid(T.int64(2), T.int64(3)):
   with T.block("T_add"):
   v_ax0, v_ax1 = T.axis.remap("SS", [ax0, ax1])
   T.reads(s[v_ax0, v_ax1], x[v_ax0, v_ax1])
   T.writes(T_add[v_ax0, v_ax1])
   T_add[v_ax0, v_ax1] = s[v_ax0, v_ax1] + x[v_ax0, v_ax1]
   
   @R.function
   def main(x: R.Tensor((2, 3), dtype="float32")) -> R.Tensor((2, 3), 
dtype="float32"):
   cls = Module
   
   @R.function
   def while_loop(i: R.Tensor((), dtype="int32"), s: R.Tensor((2, 3), 
dtype="float32")) -> R.Tensor((2, 3), dtype="float32"):
   cond: R.Tensor((), dtype="bool") = 
R.call_pure_packed("test.vm.less", i, R.const(10, "int32"), 
sinfo_args=(R.Tensor((), dtype="bool"),))
   c: R.Tensor((), dtype="int32") = R.const(1, "int32")
   if cond:
   new_i = R.call_tir(cls.add, (i, c), 

Re: [I] [Relax][Bug] Segmentation fault when using the MergeCompositeFunctions transform [tvm]

2024-07-31 Thread via GitHub


Cookiee235 commented on issue #17120:
URL: https://github.com/apache/tvm/issues/17120#issuecomment-2260484522

   @Lunderberg Thanks a lot! The above test case now runs correctly! 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Relax][Bug] CodeGenVM cannot handle this intrinsic tensor_to_shape [tvm]

2024-07-31 Thread via GitHub


Cookiee235 commented on issue #17175:
URL: https://github.com/apache/tvm/issues/17175#issuecomment-2260467890

   @Lunderberg Thanks for your PR and the explanation; that makes sense now. BTW, 
the initial test case runs correctly with PR #17218! 


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] Replacing unary ops with LookUpTable and Take op to improve performance [tvm]

2024-07-31 Thread via GitHub


quic-sanirudh commented on code in PR #17214:
URL: https://github.com/apache/tvm/pull/17214#discussion_r1698096388


##
python/tvm/contrib/hexagon/generate_take_op.py:
##
@@ -0,0 +1,86 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=missing-docstring, invalid-name, unnecessary-comprehension, 
unused-argument
+
+import tvm
+import tvm.testing
+from tvm import relax
+from tvm.contrib.hexagon import hexagon_unary_ops
+
+
+def op_replace(call_node):
+def is_op(op_name: str, call_node: relax.Call) -> bool:
+if not isinstance(call_node, relax.Call):
+return False
+call_tir_op = tvm.ir.Op.get("relax.call_tir")
+if call_node.op != call_tir_op:
+return False
+global_var = call_node.args[0]
+return op_name in global_var.name_hint

Review Comment:
   Should we use a better solution than looking at the global_var name to 
determine the type of op? Names might not be the most reliable way. Perhaps 
look for an attribute like `operator_name` similar to what is used by 
[`AlterOpImpl` 
pass](https://github.com/apache/tvm/blob/main/tests/python/relax/test_transform_alter_op_impl.py#L53)?
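   
   A rough sketch of what attribute-based matching could look like (the 
`"operator_name"` key is hypothetical here, standing in for whatever attribute 
the producer pass records):
   
   ```python
   import tvm
   from tvm import relax
   
   
   def is_op(op_name: str, call_node: relax.Call, mod: tvm.IRModule) -> bool:
       """Match a call_tir node by an attribute on the callee, not its name."""
       if not isinstance(call_node, relax.Call):
           return False
       if call_node.op != tvm.ir.Op.get("relax.call_tir"):
           return False
       callee = mod[call_node.args[0]]  # the PrimFunc behind the call_tir
       attrs = callee.attrs
       if attrs is None or "operator_name" not in attrs:
           return False
       return attrs["operator_name"] == op_name
   ```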



##
python/tvm/contrib/hexagon/generate_take_op.py:
##
@@ -0,0 +1,86 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+# pylint: disable=missing-docstring, invalid-name, unnecessary-comprehension, 
unused-argument
+
+import tvm
+import tvm.testing
+from tvm import relax
+from tvm.contrib.hexagon import hexagon_unary_ops
+
+
+def op_replace(call_node):
+def is_op(op_name: str, call_node: relax.Call) -> bool:
+if not isinstance(call_node, relax.Call):
+return False
+call_tir_op = tvm.ir.Op.get("relax.call_tir")
+if call_node.op != call_tir_op:
+return False
+global_var = call_node.args[0]
+return op_name in global_var.name_hint
+
+ops = ["tanh", "sqrt", "rsqrt", "exp", "erf", "sigmoid", "hardswish", 
"log", "abs"]
+for op in ops:
+if is_op(op, call_node):
+return True
+return False
+
+
+@relax.expr_functor.mutator
+class Tanh2TakeReplace(tvm.relax.PyExprMutator):
+def __init__(self, mod: tvm.IRModule) -> None:
+super().__init__(mod)
+self.mod_ = mod
+
+def transform(self) -> tvm.IRModule:
+# Iterate over all the nodes to check for the node replaceable
+for global_var, func in self.mod_.functions.items():
+# Skip non-relax functions
+if not isinstance(func, relax.Function):
+continue
+updated_func = self.visit_expr(func)
+self.builder_.normalize(updated_func)
+self.builder_.update_func(global_var, updated_func)
+# At the end of the transformation we return the updated IRModule from 
the BlockBuilder.
+return self.builder_.get()
+
+def visit_call_(self, call_node: relax.Call) -> relax.Call:
+if call_node.args[1][0].struct_info.dtype == "uint8":

Review Comment:
   Should we verify whether the call_node is a `relax.call_tir` op before 
accessing the args?
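   
   If it helps, a small helper along these lines could make the guard explicit 
(just a sketch; `visit_call_` would return early whenever it is false):
   
   ```python
   import tvm
   from tvm import relax
   
   
   def is_call_tir(call_node: relax.Call) -> bool:
       """Return True only for relax.call_tir calls, so that
       call_node.args[1][0] can be indexed safely afterwards."""
       if not isinstance(call_node, relax.Call):
           return False
       return call_node.op == tvm.ir.Op.get("relax.call_tir")
   ```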



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [CI Problem] Missing AWS Key for S3 file cache storage [tvm]

2024-07-31 Thread via GitHub


tqchen closed issue #17019: [CI Problem] Missing AWS Key for S3 file cache 
storage
URL: https://github.com/apache/tvm/issues/17019


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [TIR] Validate tir::Buffer axis_separators on construction [tvm]

2024-07-31 Thread via GitHub


quic-sanirudh commented on code in PR #17219:
URL: https://github.com/apache/tvm/pull/17219#discussion_r1698068199


##
src/tir/ir/buffer.cc:
##
@@ -334,24 +334,38 @@ inline Array BufferOffset(const BufferNode* n, 
Array index,
   return offsets;
 }
 
-Buffer Buffer::GetFlattenedBuffer() const {
-  auto self = operator->();
-
+static void ValidateAxisSeparators(const Array& axis_separators, 
size_t buffer_dim) {
   // These checks ensure that all output axes contain at least one
   // input axis.
-  for (size_t i = 0; (i + 1) < self->axis_separators.size(); i++) {
-auto sep = self->axis_separators[i]->value;
-auto next_sep = self->axis_separators[i + 1]->value;
-ICHECK_LT(sep, next_sep) << "Axis separators must be in strictly 
increasing order.";
-  }
-  if (self->axis_separators.size()) {
-auto first_sep = self->axis_separators[0]->value;
-ICHECK_GT(first_sep, 0) << "First axis separator must be strictly greater 
than 0, "
-<< "so that first output axis contains at least 
one input axis";
-auto last_sep = self->axis_separators[self->axis_separators.size() - 
1]->value;
-ICHECK_LT(last_sep, self->shape.size())
-<< "Last output axis must contain at least one input axis.";
+  for (size_t i = 0; (i + 1) < axis_separators.size(); i++) {
+auto sep = axis_separators[i]->value;
+auto next_sep = axis_separators[i + 1]->value;
+CHECK_LT(sep, next_sep) << "ValueError: "
+<< "Axis separators must be in strictly increasing 
order, "
+<< "but axis_separators[" << i << "] = " << sep
+<< " is greater than or equal to axis_separators[" 
<< (i + 1)
+<< "] = " << next_sep << ".";
+  }
+  if (axis_separators.size()) {
+auto first_sep = axis_separators[0]->value;
+CHECK_GT(first_sep, 0) << "ValueError: "
+   << "First axis separator must be strictly greater 
than 0, "
+   << "so that first output axis contains at least one 
input axis.  "
+   << "However, the axis_separators[0] = " << 
first_sep;
+auto last_sep = axis_separators[axis_separators.size() - 1]->value;
+CHECK_LT(last_sep, buffer_dim)
+<< "ValueError: "
+<< "Last output axis must contain at least one input axis.  "
+<< "However, the axis_separators[" << (axis_separators.size() - 1) << 
"] = " << last_sep
+<< " does not leave any input axes between it and the buffer's 
dimensionality "
+<< buffer_dim;

Review Comment:
   For a case like `axis_separators=[1, 2]` where the buffer is, say, 4-d 
(NHWC/NCHW), both these checks would pass, but that might also be confusing, as 
the user would expect 3 flattened dimensions with `axis_separators.size() == 
2`, but we get only 2 flattened dimensions. 
   
   Should we require at least one valid axis between separators? Or do we allow 
it to be flattened into a single axis separator in this case?
   



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [Relax] Allow `out_sinfo` to be omitted from `R.call_tir` [tvm]

2024-07-30 Thread via GitHub


tqchen commented on PR #17216:
URL: https://github.com/apache/tvm/pull/17216#issuecomment-2259211572

   Just want to note that it is not always possible to do such inference.
   
   ```python
   class IRModule:
       @T.prim_func
       def reshape(A: Buffer((2, 4)), B: Buffer((n, m))):
           ...

       def main(A: Buffer((2, 4))):
           lv0 = R.call_tir(reshape, [A], R.Tensor((1, 8)))
   ```
   
   For example, the above code is a valid TIR call, but it needs the output sinfo 
to be explicitly specified. Because we have such cases, and `call_tir` is a 
lower-level function, it is safer to always ask for the sinfo, but check its 
consistency against the corresponding prim_func signature if needed.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [Relax] Allow `out_sinfo` to be omitted from `R.call_tir` [tvm]

2024-07-30 Thread via GitHub


tqchen commented on code in PR #17216:
URL: https://github.com/apache/tvm/pull/17216#discussion_r1697590272


##
src/relax/op/op.cc:
##
@@ -331,8 +331,133 @@ RELAY_REGISTER_OP("relax.call_tir")
 .set_attr("FNormalize", NormalizeCallTIR)
 .set_attr("FPurity", Bool(true));
 
-Expr MakeCallTIR(Expr func, Tuple args, Array out_sinfo_list,
+static Array InferCallTIROutputStructInfo(Expr func, Tuple 
args,

Review Comment:
   One thing to note is that it is not always possible to do such inference, 
since it is possible to have TIR functions like reshape, where the output shape 
is explicitly specified via the destination buffer. For this particular 
low-level call_tir op, I think it is safer to always ask for the sinfo, then 
explicitly check the consistency to avoid errors.
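   
   For instance, something along these lines (a sketch only; it assumes a 
`reshape` PrimFunc like the one in the earlier snippet is part of `Module`):
   
   ```python
   # Inside main(), with cls = Module:
   lv0 = R.call_tir(cls.reshape, (A,), out_sinfo=R.Tensor((1, 8), dtype="float32"))
   # Omitting out_sinfo is not an option here: (1, 8) cannot be inferred from
   # the callee signature, only checked against it (2 * 4 == 1 * 8 elements).
   ```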



-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Relax][Bug] CodeGenVM cannot handle this intrinsic tensor_to_shape [tvm]

2024-07-30 Thread via GitHub


Lunderberg commented on issue #17175:
URL: https://github.com/apache/tvm/issues/17175#issuecomment-2259194413

   > I'm baffled. Do we need to explicitly call the ``FuseTIR` transform before 
compiling any model?
   
   Looking at the specific example, it looks like there are two distinct `lv` 
variables in the input.  One is produced by `R.tensor_to_shape`, while the 
other is produced by `R.call_pure_packed("vm.builtin.tensor_to_shape", ...)`.  
When `FuseTIR` is called, it internally performs dead-code elimination to 
remove any values that are no longer required after fusion, along with any 
no-longer-used PrimFunc implementations.  This has the side effect of removing 
the call to `R.tensor_to_shape(x)`, as its output is entirely unused.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Relax][Bug] CodeGenVM cannot handle this intrinsic tensor_to_shape [tvm]

2024-07-30 Thread via GitHub


Lunderberg commented on issue #17175:
URL: https://github.com/apache/tvm/issues/17175#issuecomment-2259189681

   There are a few operators that don't have `FLegalize` implementations, and 
they expect to be lowered/pattern-matched out prior to building.  Unfortunately, 
this results in very hard-to-interpret error messages when the lowering reaches 
the `CodeGenVM` step.
   
   For the initial error case, it should be fixed as a side effect of 
https://github.com/apache/tvm/pull/17218, as it adds a check for 
`R.tensor_to_shape` in `VMBuiltinLower`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [CI] Reduce logging level when checking if docker image exists [tvm]

2024-07-30 Thread via GitHub


Lunderberg opened a new pull request, #17221:
URL: https://github.com/apache/tvm/pull/17221

   Prior to this commit, the `image_exists` utility in 
`determine_docker_images.py` logged the full response on success, and the full 
HTTP error if an exception was caught.  However, that exception is the expected 
behavior when loading a docker image from `tlcpackstaging`, such as the current 
images tagged with `20240428-060115-0b09ed018`. Logging this fallback as an 
error makes it difficult to find the first actual error that occurred in CI.
   
   This commit updates these logging statements from `logging.info` and 
`logging.exception` to `logging.debug`.
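   
   The change boils down to this pattern (an illustrative sketch, not the 
actual diff; the URL-based lookup is a stand-in for the script's real helper):
   
   ```python
   import logging
   
   import requests
   
   
   def image_exists(image_url: str) -> bool:
       try:
           response = requests.get(image_url)
           response.raise_for_status()
           logging.debug("Found image: %s", response.json())
           return True
       except requests.HTTPError as err:
           # Expected when the image only lives in tlcpackstaging, so keep it
           # at debug level instead of logging.exception.
           logging.debug("Image lookup failed: %s", err)
           return False
   ```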


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Relax][Bug] Segmentation fault when using the MergeCompositeFunctions transform [tvm]

2024-07-30 Thread via GitHub


Lunderberg commented on issue #17120:
URL: https://github.com/apache/tvm/issues/17120#issuecomment-2259052186

   I'd been hoping that this one would be resolved incidentally through 
https://github.com/apache/tvm/pull/17212, and while it did change the segfault 
to an exception, it didn't solve the root cause.  This bug should now be fixed 
with #17220.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



[PR] [Relax] Handle presence of R.call_tir in MergeCompositeFunctions [tvm]

2024-07-30 Thread via GitHub


Lunderberg opened a new pull request, #17220:
URL: https://github.com/apache/tvm/pull/17220

   Prior to this commit, use of `R.call_tir` in the input to 
`MergeCompositeFunctions` would result in a segfault, when attempting to 
determine the `Group*` that contains the `relax::GlobalVar` of the callee.
   
   This commit updates `MergeCompositeFunctions` to check for 
`relax::GlobalVar` and `relax::Tuple` instances.
   
   Closes https://github.com/apache/tvm/issues/17120


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [PR] [TIR] Validate tir::Buffer axis_separators on construction [tvm]

2024-07-30 Thread via GitHub


Lunderberg commented on PR #17219:
URL: https://github.com/apache/tvm/pull/17219#issuecomment-2259001685

   And it was definitely good to move the validation, as it exposed an 
inconsistency in how the metaschedule `sch.set_axis_separator` gets applied.  
When applied to a buffer view, which may have different extents/dimensionality 
than the backing allocation, it could produce a buffer with invalid 
`axis_separators` for its shape.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] fatal error: string_view: No such file or directory [tvm]

2024-07-30 Thread via GitHub


Lunderberg commented on issue #17209:
URL: https://github.com/apache/tvm/issues/17209#issuecomment-2258937142

   When you upgraded the GCC version, are you now compiling with the stdlib 
implementation provided by the new GCC version?  This may require re-running 
cmake, if the old stdlib implementation is saved in `CMakeCache.txt`.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Bug] [Relax] InternalError: Check failed: last_sep < self->shape.size() Last output axis must contain at least one input axis [tvm]

2024-07-30 Thread via GitHub


Cookiee235 commented on issue #17215:
URL: https://github.com/apache/tvm/issues/17215#issuecomment-2258718047

   > @Cookiee235 I've implemented #17219 to move the validation logic to the 
`tir::Buffer` constructor. For your test case, it should now be raised during 
the parsing of `Module`, rather than being delayed until very late in the 
`relax.build` pipeline.
   
   @Lunderberg Thanks for your quick fix! The invalid IRs now fail as expected 
during the parsing of `Module`. I'll close this issue once the related PR is 
merged.


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org



Re: [I] [Bug] [Relax] VMBuiltinLower expects bound value to be a ShapeExpr [tvm]

2024-07-30 Thread via GitHub


Cookiee235 commented on issue #17217:
URL: https://github.com/apache/tvm/issues/17217#issuecomment-2258668941

   @Lunderberg The test case also runs correctly under the given patch on my 
side! Thank you!
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.org

For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


