Re: [PR] Bump pillow from 9.3.0 to 10.0.1 in /apps/microtvm/cmsisnn [tvm]

2023-10-06 Thread via GitHub
lhutton1 merged PR #15866: URL: https://github.com/apache/tvm/pull/15866 -- This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. To unsubscribe, e-mail: commits-unsubscr...@tvm.apache.

[tvm] branch main updated (e89b39e55b -> 6f04fdab5a)

2023-10-06 Thread lukhut
This is an automated email from the ASF dual-hosted git repository. lukhut pushed a change to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git from e89b39e55b Bump pillow from 9.2.0 to 10.0.1 in /apps/microtvm (#15865) add 6f04fdab5a Bump pillow from 9.3.0 to 10.0.1 in /apps/microtvm/cmsisnn (#15866)

[tvm] branch dependabot/pip/apps/microtvm/cmsisnn/pillow-10.0.1 deleted (was 62a873f876)

2023-10-06 Thread lukhut
This is an automated email from the ASF dual-hosted git repository. lukhut pushed a change to branch dependabot/pip/apps/microtvm/cmsisnn/pillow-10.0.1 in repository https://gitbox.apache.org/repos/asf/tvm.git was 62a873f876 Bump pillow from 9.3.0 to 10.0.1 in /apps/microtvm/cmsisnn The revisions that were on this branch are still contained in other references; therefore, this change does not discard any commits from the repository.

Re: [PR] Bump pillow from 9.3.0 to 10.0.1 in /apps/microtvm/ethosu [tvm]

2023-10-06 Thread via GitHub
lhutton1 merged PR #15867: URL: https://github.com/apache/tvm/pull/15867

[tvm] branch main updated: Bump pillow from 9.3.0 to 10.0.1 in /apps/microtvm/ethosu (#15867)

2023-10-06 Thread lukhut
This is an automated email from the ASF dual-hosted git repository. lukhut pushed a commit to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git The following commit(s) were added to refs/heads/main by this push: new ec92ea960d Bump pillow from 9.3.0 to 10.0.1 in /apps/microtvm/ethosu (#15867)

[tvm] branch dependabot/pip/apps/microtvm/ethosu/pillow-10.0.1 deleted (was e623ba5adf)

2023-10-06 Thread lukhut
This is an automated email from the ASF dual-hosted git repository. lukhut pushed a change to branch dependabot/pip/apps/microtvm/ethosu/pillow-10.0.1 in repository https://gitbox.apache.org/repos/asf/tvm.git was e623ba5adf Bump pillow from 9.3.0 to 10.0.1 in /apps/microtvm/ethosu The revisions that were on this branch are still contained in other references; therefore, this change does not discard any commits from the repository.

Re: [PR] [CI] Update ci-gpu image [tvm]

2023-10-06 Thread via GitHub
lhutton1 commented on PR #15836: URL: https://github.com/apache/tvm/pull/15836#issuecomment-1750182870 closes: #15754

Re: [PR] [CI] Update ci-gpu image [tvm]

2023-10-06 Thread via GitHub
ashutosh-arm merged PR #15836: URL: https://github.com/apache/tvm/pull/15836

[tvm] branch main updated: [CI] Update ci-gpu image (#15836)

2023-10-06 Thread ashutoshp
This is an automated email from the ASF dual-hosted git repository. ashutoshp pushed a commit to branch main in repository https://gitbox.apache.org/repos/asf/tvm.git The following commit(s) were added to refs/heads/main by this push: new fa4aeee64e [CI] Update ci-gpu image (#15836)

Re: [I] [CI Problem] ci_gpu fails to build due to missing oneflow version [tvm]

2023-10-06 Thread via GitHub
lhutton1 commented on issue #15754: URL: https://github.com/apache/tvm/issues/15754#issuecomment-1750210468 fixed by https://github.com/apache/tvm/pull/15836

Re: [I] [CI Problem] ci_gpu fails to build due to missing oneflow version [tvm]

2023-10-06 Thread via GitHub
lhutton1 closed issue #15754: [CI Problem] ci_gpu fails to build due to missing oneflow version URL: https://github.com/apache/tvm/issues/15754

Re: [PR] [Docker] Add LLVM 17 to the LLVM install script [tvm]

2023-10-06 Thread via GitHub
lhutton1 commented on PR #15799: URL: https://github.com/apache/tvm/pull/15799#issuecomment-1750553096 @tvm-bot rerun

Re: [PR] [CMSIS-NN] Move CMSIS_5 from SHA to release based upgrade [tvm]

2023-10-06 Thread via GitHub
ashutosh-arm commented on PR #15747: URL: https://github.com/apache/tvm/pull/15747#issuecomment-1750680629 This was dependent on https://github.com/apache/tvm/pull/15836, so retrying now to see if CI works this time.

Re: [PR] [RFC] Scalable vectors in TIR [tvm-rfcs]

2023-10-06 Thread via GitHub
ekalda commented on PR #104: URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1750730364 I'm back from holiday and want to get this RFC moving again! Thanks for all the good discussion so far, I've made some changes to the RFC: * Use `vscale` directly instead of `vfactor` and

Re: [PR] [Unity] Propagate extra symbolic vars through LiftTransformParams [tvm]

2023-10-06 Thread via GitHub
Lunderberg commented on code in PR #15699: URL: https://github.com/apache/tvm/pull/15699#discussion_r1348897916 ## src/relax/transform/lift_transform_params.cc: ## @@ -43,26 +43,92 @@ struct LiftTransformParamsInfoPlan { lifted_bindings; // the bindings of the original f

Re: [PR] [RFC] Scalable vectors in TIR [tvm-rfcs]

2023-10-06 Thread via GitHub
kparzysz-quic commented on PR #104: URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1751038124 Sorry for the delay... What I'm aiming at is to be able to lower the TIR to a generic CPU, that is to an architecture that does not support SVE. The TIR will need to have some defa

Re: [PR] [RFC] Scalable vectors in TIR [tvm-rfcs]

2023-10-06 Thread via GitHub
Lunderberg commented on PR #104: URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1751100613 > What I'm aiming at is to be able to lower the TIR to a generic CPU, that is to an architecture that does not support SVE. The TIR will need to have some default lowering in CodeGenLLV

Re: [PR] [RFC] Scalable vectors in TIR [tvm-rfcs]

2023-10-06 Thread via GitHub
kparzysz-quic commented on PR #104: URL: https://github.com/apache/tvm-rfcs/pull/104#issuecomment-1751113275 > Could it instead be in a target-dependent lowering pass? Sure. My idea is to have a single SVE-aware vectorization pass in TVM, and then be able to utilize it for all target
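The lowering strategy discussed in this thread, emitting ordinary fixed-width vector operations on targets without scalable-vector support, can be sketched in plain Python. Everything below (the function name, the `lanes_per_vscale` parameter) is a hypothetical illustration of the idea, not TVM's actual CodeGenLLVM lowering:

```python
# Hypothetical sketch: a scalable vector length is a multiple of an unknown
# vscale. For a generic CPU without SVE, a default lowering can pin vscale
# to a fixed constant and process the data in ordinary fixed-width chunks.
def lower_scalable_add(a, b, vscale=1, lanes_per_vscale=4):
    width = lanes_per_vscale * vscale  # concrete width once vscale is pinned
    out = []
    for i in range(0, len(a), width):  # one fixed-width "vector op" per chunk
        out.extend(x + y for x, y in zip(a[i:i + width], b[i:i + width]))
    return out

lower_scalable_add([1, 2, 3, 4, 5], [10, 20, 30, 40, 50])  # [11, 22, 33, 44, 55]
```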

Re: [I] [Bug][Unity] nn.Module external modules usage [tvm]

2023-10-06 Thread via GitHub
cyx-6 commented on issue #15805: URL: https://github.com/apache/tvm/issues/15805#issuecomment-1751243263 I see. Actually, we cannot apply `ExternModule` to the original model. `ExternModule` is the new `nn.Module` interface, from `tvm.relax.frontend.nn`. But the original llama model is from `

[PR] [Arith] Simplify the result of non-divisible floordiv [tvm]

2023-10-06 Thread via GitHub
vinx13 opened a new pull request, #15881: URL: https://github.com/apache/tvm/pull/15881 This is a follow-up of https://github.com/apache/tvm/pull/15665. It simplifies the special case when `floormod(floordiv(x, lower_factor), ext) == x`. This prevents a nested `IterMark` from being created.
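The simplification described above can be illustrated with plain Python integer arithmetic (a minimal sketch, not TVM's arith rewriting): when the quotient `floordiv(x, lower_factor)` is known to lie in `[0, ext)`, the outer `floormod` by `ext` is a no-op and can be dropped.

```python
def floordiv(a, b):
    return a // b

def floormod(a, b):
    return a % b

lower_factor, ext = 4, 8  # hypothetical values for illustration
# For x in [0, lower_factor * ext), the quotient lies in [0, ext),
# so floormod(floordiv(x, lower_factor), ext) == floordiv(x, lower_factor).
for x in range(lower_factor * ext):
    q = floordiv(x, lower_factor)
    assert floormod(q, ext) == q
```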

[PR] [Unity] Replace relax_vm/memory_manager with memory/memory_manager [Do Not Merge] [tvm]

2023-10-06 Thread via GitHub
yongwww opened a new pull request, #15882: URL: https://github.com/apache/tvm/pull/15882 We can unify the `runtime/relax_vm/memory_manager` and `runtime/memory/memory_manager` by replacing the former with the latter. DO NOT merge it, this PR should not be merged until PR #15833 is merged.

[PR] [Unity][Transform] Allow static Relax arguments to dynamic PrimFunc [tvm]

2023-10-06 Thread via GitHub
Lunderberg opened a new pull request, #15883: URL: https://github.com/apache/tvm/pull/15883 Prior to this commit, the `relax.transform.FuseTIR` transform required that the shapes arguments passed into a `PrimFunc` be structurally equivalent to the shapes of the parameters, and that any repl
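The shape-matching relaxation described above can be sketched as follows (a hypothetical plain-Python model of binding symbolic shape variables to static caller shapes; `match_shapes` is not an actual TVM function):

```python
# Hypothetical sketch: match a PrimFunc's (possibly symbolic) parameter shape
# against a static Relax argument shape, binding each symbolic var to the
# concrete extent and rejecting inconsistent or mismatched shapes.
def match_shapes(param_shape, arg_shape):
    if len(param_shape) != len(arg_shape):
        return None
    binding = {}
    for p, a in zip(param_shape, arg_shape):
        if isinstance(p, str):                  # symbolic var such as "n"
            if binding.setdefault(p, a) != a:
                return None                     # same var bound two ways
        elif p != a:
            return None                         # static dimensions differ
    return binding

match_shapes([16, "n"], [16, 32])  # binds {"n": 32}
```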

Re: [PR] [Unity][Transform] Allow static Relax arguments to dynamic PrimFunc [tvm]

2023-10-06 Thread via GitHub
masahi commented on PR #15883: URL: https://github.com/apache/tvm/pull/15883#issuecomment-1751490974 cc @Hzfengsy

[PR] [Unity][Relax] Support Dynamic Tensor as Index, torch frontend [tvm]

2023-10-06 Thread via GitHub
guoyaol opened a new pull request, #15884: URL: https://github.com/apache/tvm/pull/15884 Add support: allow one dynamic tensor as torch.tensor's index In Stable Diffusion XL: e.g. `output = input1 [input2.argmax(dim = -1), ]` **Before:** we can only support int, slice, ellipsis
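Conceptually, indexing with another tensor's `argmax` lowers to a gather/take along the first axis. A minimal pure-Python sketch of that behavior (no torch or TVM; `argmax` and `take` here are hypothetical stand-ins for the framework ops):

```python
def argmax(row):
    # index of the largest element, like torch.argmax along the last dim
    return max(range(len(row)), key=row.__getitem__)

def take(rows, indices):
    # gather rows by index, analogous to input1[index_tensor, ]
    return [rows[i] for i in indices]

input1 = [[0, 1], [2, 3], [4, 5]]        # hypothetical data tensor
input2 = [0.1, 0.9, 0.2]                 # hypothetical scores; argmax -> 1
output = take(input1, [argmax(input2)])  # output == [[2, 3]]
```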

Re: [I] [Bug][Unity] nn.Module external modules usage [tvm]

2023-10-06 Thread via GitHub
Cydia2018 closed issue #15805: [Bug][Unity] nn.Module external modules usage URL: https://github.com/apache/tvm/issues/15805

Re: [I] [Bug][Unity] nn.Module external modules usage [tvm]

2023-10-06 Thread via GitHub
Cydia2018 commented on issue #15805: URL: https://github.com/apache/tvm/issues/15805#issuecomment-1751565834 @cyx-6 I get it, really thanks.

Re: [PR] [Unity][Relax] Support Dynamic Tensor as Index, torch frontend [tvm]

2023-10-06 Thread via GitHub
Hzfengsy commented on PR #15884: URL: https://github.com/apache/tvm/pull/15884#issuecomment-1751601521 Thanks for the fix!

[tvm] branch nightly updated (e89b39e55b -> fa4aeee64e)

2023-10-06 Thread github-bot
This is an automated email from the ASF dual-hosted git repository. github-bot pushed a change to branch nightly in repository https://gitbox.apache.org/repos/asf/tvm.git from e89b39e55b Bump pillow from 9.2.0 to 10.0.1 in /apps/microtvm (#15865) add 6f04fdab5a Bump pillow from 9.3.0 to 10.0.1 in /apps/microtvm/cmsisnn (#15866)

Re: [PR] [Unity][Transform] Allow static Relax arguments to dynamic PrimFunc [tvm]

2023-10-06 Thread via GitHub
Hzfengsy commented on code in PR #15883: URL: https://github.com/apache/tvm/pull/15883#discussion_r1349462821 ## src/relax/transform/fuse_tir.cc: ## @@ -39,31 +39,41 @@ namespace tir { */ class SymbolicMatcher : ExprFunctor { public: - explicit SymbolicMatcher(Map* var_rem

Re: [PR] [Unity][MSC][pre M1.2] Reconstruct codegen [tvm]

2023-10-06 Thread via GitHub
Hzfengsy merged PR #15813: URL: https://github.com/apache/tvm/pull/15813

[PR] [Unity][NN] Enhance ReLU and GELU support [tvm]

2023-10-06 Thread via GitHub
Hzfengsy opened a new pull request, #15885: URL: https://github.com/apache/tvm/pull/15885 This PR adds support for ReLU in the NN module and op, and also adds support for GELU in the NN modules. cc @jwfromm @junrushao