commits — Messages by Thread
Re: [PR] [Transform] De-duplicate MatchCast nodes in EliminateCommonSubexpr [tvm]
via GitHub
Re: [PR] [Transform] De-duplicate MatchCast nodes in EliminateCommonSubexpr [tvm]
via GitHub
Re: [PR] [Transform] De-duplicate MatchCast nodes in EliminateCommonSubexpr [tvm]
via GitHub
Re: [PR] [Transform] De-duplicate MatchCast nodes in EliminateCommonSubexpr [tvm]
via GitHub
Re: [PR] [Transform] De-duplicate MatchCast nodes in EliminateCommonSubexpr [tvm]
via GitHub
[PR] [Transform][Bugfix] Handle non-composite lambda functions in FuseOps [tvm]
via GitHub
Re: [PR] [Transform][Bugfix] Handle non-composite lambda functions in FuseOps [tvm]
via GitHub
Re: [PR] [Transform][Bugfix] Handle non-composite lambda functions in FuseOps [tvm]
via GitHub
Re: [PR] [Transform][Bugfix] Handle non-composite lambda functions in FuseOps [tvm]
via GitHub
Re: [PR] [Transform][Bugfix] Handle non-composite lambda functions in FuseOps [tvm]
via GitHub
Re: [PR] [Transform][Bugfix] Handle non-composite lambda functions in FuseOps [tvm]
via GitHub
Re: [PR] [Transform][Bugfix] Handle non-composite lambda functions in FuseOps [tvm]
via GitHub
[PR] [Transform] Implement relax.transform.ReorderPermuteDimsAfterConcat [tvm]
via GitHub
Re: [PR] [Transform] Implement relax.transform.ReorderPermuteDimsAfterConcat [tvm]
via GitHub
Re: [PR] [Transform] Implement relax.transform.ReorderPermuteDimsAfterConcat [tvm]
via GitHub
Re: [PR] [Transform] Implement relax.transform.ReorderPermuteDimsAfterConcat [tvm]
via GitHub
Re: [PR] [Transform] Implement relax.transform.ReorderPermuteDimsAfterConcat [tvm]
via GitHub
Re: [PR] [Transform] Implement relax.transform.ReorderPermuteDimsAfterConcat [tvm]
via GitHub
Re: [PR] [Transform] Implement relax.transform.ReorderPermuteDimsAfterConcat [tvm]
via GitHub
[PR] [Draft][Transform] Check for zero-param operators in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Draft][Transform] Check for zero-param operators in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Draft][Transform] Check for zero-param operators in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Draft][Transform] Check for zero-param operators in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Draft][Transform] Check for zero-param operators in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Draft][Transform] Check for zero-param operators in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Draft][Transform] Check for zero-param operators in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Draft][Transform] Check for zero-param operators in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Draft][Transform] Check for zero-param operators in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Draft][Transform] Check for zero-param operators in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Draft][Transform] Check for zero-param operators in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Draft][Transform] Check for zero-param operators in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Draft][Transform] Check for zero-param operators in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Draft][Transform] Check for zero-param operators in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Transform] Check for zero-param operators in LiftTransformParams [tvm]
via GitHub
[PR] [Relax][Transform] Preserve param names in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Relax][Transform] Preserve param names in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Relax][Transform] Preserve param names in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Relax][Transform] Preserve param names in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Relax][Transform] Preserve param names in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Relax][Transform] Preserve param names in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Relax][Transform] Preserve param names in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Relax][Transform] Preserve param names in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Relax][Transform] Preserve param names in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Relax][Transform] Preserve param names in LiftTransformParams [tvm]
via GitHub
Re: [PR] [Relax][Transform] Preserve param names in LiftTransformParams [tvm]
via GitHub
[PR] [Unity][TVMScript] Parse R.Object return type from call_pure_packed [tvm]
via GitHub
Re: [PR] [Unity][TVMScript] Parse R.Object return type from call_pure_packed [tvm]
via GitHub
[PR] [Relax] Handle dynamic arguments in legalization of nn.attention [tvm]
via GitHub
Re: [PR] [Relax] Handle dynamic arguments in legalization of nn.attention [tvm]
via GitHub
Re: [PR] [Relax] Handle dynamic arguments in legalization of nn.attention [tvm]
via GitHub
[PR] [Unity][Transform] Handle dynamic shapes in CombineParallelMatmul [tvm]
via GitHub
Re: [PR] [Unity][Transform] Handle dynamic shapes in CombineParallelMatmul [tvm]
via GitHub
Re: [PR] [Unity][Transform] Handle dynamic shapes in CombineParallelMatmul [tvm]
via GitHub
Re: [PR] [Unity][Transform] Handle dynamic shapes in CombineParallelMatmul [tvm]
via GitHub
[PR] [Unity][Transform] Check for permute_dims in ExpandMatmulOfSum [tvm]
via GitHub
Re: [PR] [Unity][Transform] Check for permute_dims in ExpandMatmulOfSum [tvm]
via GitHub
[PR] [Unity] Check for transpose and dynamic shape in AdjustMatmulOrder [tvm]
via GitHub
Re: [PR] [Unity] Check for transpose and dynamic shape in AdjustMatmulOrder [tvm]
via GitHub
Re: [PR] [Unity] Check for transpose and dynamic shape in AdjustMatmulOrder [tvm]
via GitHub
Re: [PR] [Unity] Check for transpose and dynamic shape in AdjustMatmulOrder [tvm]
via GitHub
Re: [PR] [Relax] Ignore non-relax functions in relax.transform.RunCodegen [tvm]
via GitHub
[PR] [Arith] Provide tighter ConstIntBounds for special cases [tvm]
via GitHub
Re: [PR] [Arith] Provide tighter ConstIntBounds for special cases [tvm]
via GitHub
Re: [PR] [Arith] Provide tighter ConstIntBounds for special cases [tvm]
via GitHub
Re: [PR] [Arith] Provide tighter ConstIntBounds for special cases [tvm]
via GitHub
Re: [PR] [Arith] Provide tighter ConstIntBounds for special cases [tvm]
via GitHub
Re: [PR] [Arith] Provide tighter ConstIntBounds for special cases [tvm]
via GitHub
Re: [PR] [Arith] Provide tighter ConstIntBounds for special cases [tvm]
via GitHub
Re: [PR] [Arith] Provide tighter ConstIntBounds for special cases [tvm]
via GitHub
[PR] [Debug] Improve error message for codegen pattern mismatches [tvm]
via GitHub
[PR] [Unity][Analysis] Include impure call in VerifyWellFormed errors [tvm]
via GitHub
Re: [PR] [Unity][Analysis] Include impure call in VerifyWellFormed errors [tvm]
via GitHub
[PR] [Unity][TIR] Clear struct info when specializing PrimFunc [tvm]
via GitHub
Re: [PR] [Unity][TIR] Clear struct info when specializing PrimFunc [tvm]
via GitHub
Re: [PR] [Unity][TIR] Clear struct info when specializing PrimFunc [tvm]
via GitHub
Re: [PR] [Unity][TIR] Clear struct info when specializing PrimFunc [tvm]
via GitHub
Re: [PR] [Unity][TIR] Clear struct info when specializing PrimFunc [tvm]
via GitHub
[PR] [Unity][VM] Recursively visit match bindings in VMShapeLowerMutator [tvm]
via GitHub
[PR] [Doc] Change img path to remove mxnet dependency [tvm]
via GitHub
(tvm) branch main updated: [Marvell BYOC]: Marvell AI Accelerator Integration - Phase 1 (#16570)
syfeng
(tvm) branch main updated: [KVCache] Support mode "None" for Rotary Embebdding (#16580)
tqchen
(tvm) branch main updated: [MISC] Update the 3rdparty/libflash_attn submodule (#16576)
tqchen
(tvm) branch main updated: [KVCache] Support returning query positions (#16578)
tqchen
(tvm) branch nightly updated (274c368dba -> daa37e7e95)
github-bot
Re: [I] [Tracking Issue] Remove MXNet Dependency [tvm]
via GitHub
Re: [PR] [Unity] Add multinomial from uniform sample [tvm]
via GitHub
Re: [PR] [Unity] Add multinomial from uniform sample [tvm]
via GitHub
Re: [PR] [Unity] Add multinomial from uniform sample [tvm]
via GitHub
[PR] [KVCache] Support mode "None" for Rotary Embebdding [tvm]
via GitHub
Re: [PR] [KVCache] Support mode "None" for Rotary Embebdding [tvm]
via GitHub
[PR] [Dlight] Scheduling Low batch GEMM using GEMV-like rule [tvm]
via GitHub
Re: [PR] [Dlight] Scheduling Low batch GEMM using GEMV-like rule [tvm]
via GitHub
Re: [PR] [Dlight] Scheduling Low batch GEMM using GEMV-like rule [tvm]
via GitHub
Re: [PR] [Dlight] Scheduling Low batch GEMM using GEMV-like rule [tvm]
via GitHub
[PR] [KVCache] Support returning query positions [tvm]
via GitHub
Re: [PR] [KVCache] Support returning query positions [tvm]
via GitHub
Re: [PR] [KVCache] Support returning query positions [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Unity][Parser] Check well-formedness in the parser [tvm]
via GitHub
Re: [PR] [Relax] Additional unit tests for RemoveUnusedParameters [tvm]
via GitHub
Re: [PR] [Relax] Additional unit tests for RemoveUnusedParameters [tvm]
via GitHub
Re: [PR] [Relax] Additional unit tests for RemoveUnusedParameters [tvm]
via GitHub
Re: [PR] [Relax] Support callback as argument [tvm]
via GitHub
Re: [PR] [Marvell BYOC]: Marvell AI Accelerator Integration - Phase 1 [tvm]
via GitHub
Re: [PR] [Marvell BYOC]: Marvell AI Accelerator Integration - Phase 1 [tvm]
via GitHub
Re: [PR] [Marvell BYOC]: Marvell AI Accelerator Integration - Phase 1 [tvm]
via GitHub
Re: [PR] [Relax] Implement operators to read runtime DLTensor* information [tvm]
via GitHub
Re: [PR] [Relax] Implement operators to read runtime DLTensor* information [tvm]
via GitHub
Re: [PR] [Relax] Implement operators to read runtime DLTensor* information [tvm]
via GitHub
Re: [PR] [Relax] Implement operators to read runtime DLTensor* information [tvm]
via GitHub
Re: [PR] [Relax] Implement operators to read runtime DLTensor* information [tvm]
via GitHub
Re: [PR] [Relax] Implement operators to read runtime DLTensor* information [tvm]
via GitHub
Re: [PR] [Relax] Implement operators to read runtime DLTensor* information [tvm]
via GitHub
Re: [PR] [Relax] Implement operators to read runtime DLTensor* information [tvm]
via GitHub
Re: [PR] [Relax] Implement operators to read runtime DLTensor* information [tvm]
via GitHub
Re: [PR] [Relax] Implement operators to read runtime DLTensor* information [tvm]
via GitHub
[I] Possible issue with the "simplify pass" using the "propagate_knowns_to_simplify_expressions" flag [tvm]
via GitHub
Re: [I] [Bug] Possible issue with the "simplify pass" using the "propagate_knowns_to_simplify_expressions" flag [tvm]
via GitHub
Re: [I] [Bug] Possible issue with the "simplify pass" using the "propagate_knowns_to_simplify_expressions" flag [tvm]
via GitHub
(tvm) branch main updated (67bd739bed -> daa37e7e95)
tqchen
Re: [PR] [Relax][VM] Re-implementation of callback functions [tvm]
via GitHub
Re: [I] [Bug] Tensorization breaks when TIR one dimension is a unit iterator [tvm]
via GitHub
Re: [I] [Bug] Tensorization breaks when TIR one dimension is a unit iterator [tvm]
via GitHub
Re: [I] [Bug] Tensorization breaks when TIR one dimension is a unit iterator [tvm]
via GitHub
Re: [I] [Bug] Tensorization breaks when TIR one dimension is a unit iterator [tvm]
via GitHub
Re: [I] [Bug] Tensorization breaks when TIR one dimension is a unit iterator [tvm]
via GitHub
Re: [I] [Bug] Tensorization breaks when TIR one dimension is a unit iterator [tvm]
via GitHub
Re: [I] [Bug] Tensorization breaks when TIR one dimension is a unit iterator [tvm]
via GitHub
Re: [I] [Bug] Tensorization breaks when TIR one dimension is a unit iterator [tvm]
via GitHub
[PR] [MISC] Update the 3rdparty/libflash_attn submodule [tvm]
via GitHub
Re: [PR] [MISC] Update the 3rdparty/libflash_attn submodule [tvm]
via GitHub
Re: [PR] [MISC] Update the 3rdparty/libflash_attn submodule [tvm]
via GitHub
Re: [PR] [MISC] Update the 3rdparty/libflash_attn submodule [tvm]
via GitHub
Re: [PR] [MISC] Update the 3rdparty/libflash_attn submodule [tvm]
via GitHub
Re: [PR] [MISC] Update the 3rdparty/libflash_attn submodule [tvm]
via GitHub
Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]
via GitHub
Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]
via GitHub
Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]
via GitHub
Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]
via GitHub
Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]
via GitHub
Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]
via GitHub
Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]
via GitHub
Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]
via GitHub
Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]
via GitHub
Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]
via GitHub
Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]
via GitHub
Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]
via GitHub
Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]
via GitHub
Re: [PR] [VM] [Hexagon] Introduce 2D Discontiguous vtcm alloc tensor [tvm]
via GitHub
Re: [PR] [TIR][CUDA] Add native FP8 support to codegen [tvm]
via GitHub
Re: [PR] [TIR][CUDA] Add native FP8 support to codegen [tvm]
via GitHub
Re: [PR] [TIR][CUDA] Add native FP8 support to codegen [tvm]
via GitHub
Re: [PR] [TIR][CUDA] Add native FP8 support to codegen [tvm]
via GitHub
Re: [PR] [TIR][CUDA] Add native FP8 support to codegen [tvm]
via GitHub
(tvm) branch main updated: [MISC] Fix compile warnings (#16571)
tqchen