Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]

2024-02-20 Thread via GitHub
quic-sanirudh commented on code in PR #16557: URL: https://github.com/apache/tvm/pull/16557#discussion_r1495450159 ## src/runtime/hexagon/hexagon_device_api.h: ## @@ -199,6 +215,9 @@ class HexagonDeviceAPI final : public DeviceAPI { //! \brief Hexagon power manager std::

Re: [PR] [Doc] Fixed Docstring usage example in `tvm.ir.make_node` [tvm]

2024-02-20 Thread via GitHub
felix-ro commented on PR #16610: URL: https://github.com/apache/tvm/pull/16610#issuecomment-1953756108 > LGTM, but is there a reason for the `tvm.runtime.String` addition? The FFI call should automatically handle it, if I'm not mistaken. I just checked the FFI call, and it is handled ther

Re: [PR] [SVE] Change the dtype of Ramp and Broadcast lanes to PrimExpr [tvm]

2024-02-20 Thread via GitHub
lhutton1 merged PR #16523: URL: https://github.com/apache/tvm/pull/16523 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.

(tvm) branch main updated: [SVE] Change the dtype of Ramp and Broadcast lanes to PrimExpr (#16523)

2024-02-20 Thread lukhut
This is an automated email from the ASF dual-hosted git repository. lukhut pushed a commit to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git The following commit(s) were added to refs/heads/main by this push: new a6157a6369 [SVE] Change the dtype of Ramp and Broadcast

Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]

2024-02-20 Thread via GitHub
quic-sanirudh commented on PR #16557: URL: https://github.com/apache/tvm/pull/16557#issuecomment-1953795374 > Thanks @quic-sanirudh. I did another round and provided all the feedback, let me know if it makes sense. Thanks a lot @tqchen for taking the time to review. I've made all the c

Re: [PR] [SVE] Change the dtype of Ramp and Broadcast lanes to PrimExpr [tvm]

2024-02-20 Thread via GitHub
lhutton1 commented on PR #16523: URL: https://github.com/apache/tvm/pull/16523#issuecomment-1953777295 Thanks @ekalda @tqchen @Lunderberg

[PR] [SVE] Add support for scalable data type strings [tvm]

2024-02-20 Thread via GitHub
lhutton1 opened a new pull request, #16612: URL: https://github.com/apache/tvm/pull/16612 This commit adds support for representing scalable vectors using the string data type format. For example, "float32xvscalex4" may be used to represent the following scalable type: `DataType(kDLFloat, 3
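The string format described in the PR ("float32xvscalex4" for a scalable vector type) can be illustrated with a small parser. This is a hedged sketch in plain Python, not TVM's actual parser (which lives in the C++ runtime and may accept more forms); the function name `parse_scalable_dtype` and the returned tuple shape are hypothetical.

```python
import re

def parse_scalable_dtype(s):
    """Parse a dtype string such as "float32xvscalex4" into its parts.

    Illustrative sketch only. Returns (base, bits, lanes, is_scalable),
    where a scalable type reports the lane *multiplier* (the concrete
    lane count is vscale * lanes, known only at runtime).
    """
    m = re.fullmatch(r"(float|int|uint|bfloat)(\d+)(?:x(vscalex)?(\d+))?", s)
    if m is None:
        raise ValueError(f"unrecognized dtype string: {s}")
    base, bits, vscale, lanes = m.groups()
    return (base, int(bits), int(lanes) if lanes else 1, vscale is not None)
```

For example, `parse_scalable_dtype("float32xvscalex4")` yields `("float", 32, 4, True)`, while a fixed-length vector such as `"int8x16"` yields `("int", 8, 16, False)`.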

Re: [PR] [AOT][Testing] Print output values on test failure [tvm]

2024-02-20 Thread via GitHub
ekalda commented on PR #16611: URL: https://github.com/apache/tvm/pull/16611#issuecomment-1953961974 Thanks @lhutton1, would you mind adding a test for this change? There is currently no path that exercises this option.

[I] [CI Problem] TLCPack Docker image unable to set LLVM version [tvm]

2024-02-20 Thread via GitHub
Liam-Sturge opened a new issue, #16613: URL: https://github.com/apache/tvm/issues/16613 ### The Problem Intermittently, when running `./build_image.sh cpu`, the function `detect_llvm_version()` does not run correctly due to exceeding GitHub API rate limits. This then causes a complet
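The failure mode in the issue is an unauthenticated GitHub API call hitting the rate limit. One hedged mitigation sketch (not the actual `build_image.sh` logic) is to check the `https://api.github.com/rate_limit` endpoint, which does not count against the quota, and fall back to a pinned version when the quota is exhausted. `DEFAULT_LLVM_VERSION` and both function names here are hypothetical.

```python
import json
import urllib.request

DEFAULT_LLVM_VERSION = "17.0.6"  # hypothetical pinned fallback, not from the issue

def core_quota_remaining(payload):
    """Return the remaining core-API request quota from a /rate_limit response."""
    return payload["resources"]["core"]["remaining"]

def detect_llvm_version_with_fallback():
    """Sketch: only call the GitHub API when quota remains, else use the fallback."""
    try:
        # The /rate_limit endpoint itself does not count against the quota.
        url = "https://api.github.com/rate_limit"
        with urllib.request.urlopen(url, timeout=10) as resp:
            payload = json.load(resp)
        if core_quota_remaining(payload) == 0:
            return DEFAULT_LLVM_VERSION
    except OSError:
        # Network failure: degrade to the pinned default rather than aborting.
        return DEFAULT_LLVM_VERSION
    # ...query the GitHub API for the latest LLVM release here (omitted)...
    return DEFAULT_LLVM_VERSION
```

Degrading to a pinned default trades freshness for reproducibility, which is usually the right call in a Docker image build that must not fail intermittently.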

Re: [I] [CI Problem] TLCPack Docker image unable to set LLVM version [tvm]

2024-02-20 Thread via GitHub
Liam-Sturge closed issue #16613: [CI Problem] TLCPack Docker image unable to set LLVM version URL: https://github.com/apache/tvm/issues/16613

Re: [I] [CI Problem] TLCPack Docker image unable to set LLVM version [tvm]

2024-02-20 Thread via GitHub
Liam-Sturge commented on issue #16613: URL: https://github.com/apache/tvm/issues/16613#issuecomment-1954039000 Raised issue on TLCPack repo instead: https://github.com/tlc-pack/tlcpack/issues/190

Re: [PR] [RFC] Adding initial SVE implementation [tvm-rfcs]

2024-02-20 Thread via GitHub
lhutton1 commented on PR #18: URL: https://github.com/apache/tvm-rfcs/pull/18#issuecomment-1954039440 Closing as superseded by: https://github.com/apache/tvm-rfcs/pull/104

Re: [PR] [RFC] Adding initial SVE implementation [tvm-rfcs]

2024-02-20 Thread via GitHub
lhutton1 closed pull request #18: [RFC] Adding initial SVE implementation URL: https://github.com/apache/tvm-rfcs/pull/18

Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]

2024-02-20 Thread via GitHub
quic-sanirudh commented on PR #16557: URL: https://github.com/apache/tvm/pull/16557#issuecomment-1954267571 @tvm-bot rerun

Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]

2024-02-20 Thread via GitHub
tqchen commented on PR #16557: URL: https://github.com/apache/tvm/pull/16557#issuecomment-1954405459 happy to get it in @quic-sanirudh to unblock folks, let us follow up and try to remove the physical map by embedding them in the buffer descriptor

Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]

2024-02-20 Thread via GitHub
quic-sanirudh commented on PR #16557: URL: https://github.com/apache/tvm/pull/16557#issuecomment-1954410751 > happy to get it in @quic-sanirudh to unblock folks, let us follow up and try to remove the physical map by embedding them in the buffer descriptor Thanks a lot for understanding. I'll

Re: [PR] [Arith] Provide tighter ConstIntBounds for special cases [tvm]

2024-02-20 Thread via GitHub
Lunderberg commented on PR #16588: URL: https://github.com/apache/tvm/pull/16588#issuecomment-1955024693 Good point on checking the performance. I did a benchmark, with results shown in the plot below. The x-axis is the time required to run the analyzer with the `BoundUsingReciproca

Re: [PR] [Relax] Implement operators to read runtime DLTensor* information [tvm]

2024-02-20 Thread via GitHub
Lunderberg commented on PR #16563: URL: https://github.com/apache/tvm/pull/16563#issuecomment-1955086643 All CI tests passing, and thank you for the review @slyubomirsky ! I'll follow up with another PR to add inspection of the remainder of the `DLTensor*` fields.

Re: [PR] [Relax] Implement operators to read runtime DLTensor* information [tvm]

2024-02-20 Thread via GitHub
Lunderberg merged PR #16563: URL: https://github.com/apache/tvm/pull/16563

(tvm) branch main updated: [Relax] Implement operators to read runtime DLTensor* information (#16563)

2024-02-20 Thread lunderberg
This is an automated email from the ASF dual-hosted git repository. lunderberg pushed a commit to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git The following commit(s) were added to refs/heads/main by this push: new b21855758e [Relax] Implement operators to read runt

Re: [PR] [Doc] Fixed Docstring usage example in `tvm.ir.make_node` [tvm]

2024-02-20 Thread via GitHub
tqchen merged PR #16610: URL: https://github.com/apache/tvm/pull/16610

(tvm) branch main updated: [Doc] Fixed Docstring usage example in `tvm.ir.make_node` (#16610)

2024-02-20 Thread tqchen
This is an automated email from the ASF dual-hosted git repository. tqchen pushed a commit to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git The following commit(s) were added to refs/heads/main by this push: new e5bfb028b9 [Doc] Fixed Docstring usage example in `tvm.

Re: [PR] [Arith] Provide tighter ConstIntBounds for special cases [tvm]

2024-02-20 Thread via GitHub
tqchen commented on PR #16588: URL: https://github.com/apache/tvm/pull/16588#issuecomment-1955244162 I still think that in this case constraining the behavior of ConstIntBound is more predictable and readable. My main worry is that we open a flood gate of introducing many recursive

Re: [I] [Bug] Possible issue with the "simplify pass" using the "propagate_knowns_to_simplify_expressions" flag [tvm]

2024-02-20 Thread via GitHub
sdalvi-quic commented on issue #16577: URL: https://github.com/apache/tvm/issues/16577#issuecomment-1955530546 Thank you Eric for pointing to the location which might be causing the issue. Yes, the additional predicate is not getting appended to the constraints.

Re: [PR] [Transform][Bugfix] Handle non-composite lambda functions in FuseOps [tvm]

2024-02-20 Thread via GitHub
vinx13 commented on code in PR #16598: URL: https://github.com/apache/tvm/pull/16598#discussion_r1496719908 ## src/relax/transform/fuse_ops.cc: ## @@ -1238,10 +1238,14 @@ class CompositeFunctionAnnotator : public ExprMutator { Expr VisitExpr_(const FunctionNode* func_node)

Re: [PR] [Transform][Bugfix] Handle non-composite lambda functions in FuseOps [tvm]

2024-02-20 Thread via GitHub
Lunderberg commented on code in PR #16598: URL: https://github.com/apache/tvm/pull/16598#discussion_r1496737460 ## src/relax/transform/fuse_ops.cc: ## @@ -1238,10 +1238,14 @@ class CompositeFunctionAnnotator : public ExprMutator { Expr VisitExpr_(const FunctionNode* func_no

(tvm) branch main updated (e5bfb028b9 -> d91fe450c8)

2024-02-20 Thread wuwei
This is an automated email from the ASF dual-hosted git repository. wuwei pushed a change to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git from e5bfb028b9 [Doc] Fixed Docstring usage example in `tvm.ir.make_node` (#16610) add d91fe450c8 [Transform][Bugfix] Handl

Re: [PR] [Transform][Bugfix] Handle non-composite lambda functions in FuseOps [tvm]

2024-02-20 Thread via GitHub
vinx13 merged PR #16598: URL: https://github.com/apache/tvm/pull/16598

Re: [PR] [Transform][Bugfix] Handle non-composite lambda functions in FuseOps [tvm]

2024-02-20 Thread via GitHub
Lunderberg commented on code in PR #16598: URL: https://github.com/apache/tvm/pull/16598#discussion_r1496740578 ## src/relax/transform/fuse_ops.cc: ## @@ -1238,10 +1238,14 @@ class CompositeFunctionAnnotator : public ExprMutator { Expr VisitExpr_(const FunctionNode* func_no

Re: [PR] [Transform][Bugfix] Handle non-composite lambda functions in FuseOps [tvm]

2024-02-20 Thread via GitHub
vinx13 commented on code in PR #16598: URL: https://github.com/apache/tvm/pull/16598#discussion_r1496741848 ## src/relax/transform/fuse_ops.cc: ## @@ -1238,10 +1238,14 @@ class CompositeFunctionAnnotator : public ExprMutator { Expr VisitExpr_(const FunctionNode* func_node)

Re: [PR] [Draft][Transform] Check for zero-param operators in LiftTransformParams [tvm]

2024-02-20 Thread via GitHub
vinx13 commented on PR #16595: URL: https://github.com/apache/tvm/pull/16595#issuecomment-1955701786 > As a result, expressions such as `R.zeros([16], "int32")` would be extracted out into the parameter transformation, even though they do not depend on any parameters. Does this affect

Re: [PR] [Bugfix][TVMScript] Handle R.match_cast as last binding in if/else [tvm]

2024-02-20 Thread via GitHub
Lunderberg commented on PR #16562: URL: https://github.com/apache/tvm/pull/16562#issuecomment-1955842490 > I'm surprised that the previous logic failed. Yeah, I was pretty surprised at it, too. I think the core issue is the mismatch between the internal representation of a `relax::If

Re: [PR] [Draft][Transform] Check for zero-param operators in LiftTransformParams [tvm]

2024-02-20 Thread via GitHub
Lunderberg commented on PR #16595: URL: https://github.com/apache/tvm/pull/16595#issuecomment-1955860414 > Does this affect the result? If a weight transformation depends on some values like `R.zeros`, such transformation will no longer be lifted if `R.zeros` is not lifted, maybe better we
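The criterion being discussed, that only bindings which transitively depend on the function parameters should be lifted into the parameter transformation, while zero-param producers like `R.zeros` stay behind, can be illustrated with a toy model. This is a plain-Python sketch over hypothetical `(var, free_vars)` pairs, not the actual Relax `LiftTransformParams` pass.

```python
def partition_bindings(bindings, params):
    """Split bindings into liftable (param-dependent) and retained ones.

    `bindings` is an ordered list of (var, free_vars) pairs standing in
    for Relax bindings; `params` is the set of parameter variables.
    Toy model of the lifting criterion discussed above.
    """
    depends = set(params)          # vars known to depend on a parameter
    liftable, retained = [], []
    for var, free_vars in bindings:
        if any(v in depends for v in free_vars):
            depends.add(var)       # dependence propagates transitively
            liftable.append(var)
        else:
            retained.append(var)   # e.g. a zero-param op like R.zeros
    return liftable, retained
```

With `params = {"w"}` and bindings `[("z", []), ("wt", ["w"]), ("y", ["wt", "z"])]`, the zero-param `z` is retained while `wt` and `y` are lifted, which is exactly the split the thread is debating when a lifted transformation also consumes a non-lifted constant.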

Re: [PR] [Transform][Bugfix] Handle non-composite lambda functions in FuseOps [tvm]

2024-02-20 Thread via GitHub
Lunderberg commented on code in PR #16598: URL: https://github.com/apache/tvm/pull/16598#discussion_r1496878192 ## src/relax/transform/fuse_ops.cc: ## @@ -1238,10 +1238,14 @@ class CompositeFunctionAnnotator : public ExprMutator { Expr VisitExpr_(const FunctionNode* func_no

Re: [PR] [TIR] Enhance and fix tensorize schedule for some case [tvm]

2024-02-20 Thread via GitHub
LeiWang1999 commented on code in PR #16560: URL: https://github.com/apache/tvm/pull/16560#discussion_r1496882770 ## src/tir/schedule/primitive/blockize_tensorize.cc: ## @@ -738,6 +739,28 @@ StmtSRef Blockize(ScheduleState self, const Array& blocks, bool preser return result;

Re: [PR] [Transform] Improvements to LazyTransformParams [tvm]

2024-02-20 Thread via GitHub
slyubomirsky commented on code in PR #16602: URL: https://github.com/apache/tvm/pull/16602#discussion_r1496856295 ## python/tvm/relax/transform/lazy_transform_params.py: ## @@ -157,24 +159,60 @@ def transform(self, func: relax.Function) -> relax.Function: self.memory_f

[I] [Bug] Tensorization Failure During Multilevel Tiling with Tensor Intrin [tvm]

2024-02-20 Thread via GitHub
zxybazh opened a new issue, #16614: URL: https://github.com/apache/tvm/issues/16614 ### Expected behavior MetaSchedule Tuning Works for the given Conv2d workload ### Actual behavior Triggers an error `ValueError: The block no longer exists in the IRModule` during applica

Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]

2024-02-20 Thread via GitHub
quic-sanirudh commented on PR #16557: URL: https://github.com/apache/tvm/pull/16557#issuecomment-1955892762 @tvm-bot rerun

Re: [PR] [Arith] Provide tighter ConstIntBounds for special cases [tvm]

2024-02-20 Thread via GitHub
Lunderberg commented on PR #16588: URL: https://github.com/apache/tvm/pull/16588#issuecomment-1955895172 > My main worry is we open a flood gate of introducing many recursive rewrite patterns to `ConstIntBound` itself. Ah, I think I see where I may have miscommunicated. There isn't a

(tvm) branch nightly updated (460000202e -> d91fe450c8)

2024-02-20 Thread github-bot
This is an automated email from the ASF dual-hosted git repository. github-bot pushed a change to branch nightly in repository https://gitbox.apache.org/repos/asf/tvm.git from 460000202e [KVCache] Support passing in attn_score_scaling_factor into KV cache (#16606) add 2066ce9612 [Unity