commits
Messages by Thread
Re: [I] [Bug] Graph optimization model compilation error involving `Pad` operator [tvm]
via GitHub
Re: [I] [Bug] Graph optimization model compilation error involving `Pad` operator [tvm]
via GitHub
Re: [I] [Bug] Graph optimization model compilation error involving `Pad` operator [tvm]
via GitHub
Re: [I] [Bug] Graph optimization model compilation error involving `Pad` operator [tvm]
via GitHub
Re: [I] [Bug] Graph optimization model compilation error involving `Pad` operator [tvm]
via GitHub
[PR] [CUBLAS] Enable offloading of R.matmul + R.dequantize [tvm]
via GitHub
Re: [PR] [CUBLAS] Enable offloading of R.matmul + R.dequantize [tvm]
via GitHub
Re: [PR] Restore "pytest.mark.gpu" for RELAX tests [tvm]
via GitHub
Re: [PR] Restore "pytest.mark.gpu" for RELAX tests [tvm]
via GitHub
Re: [PR] Restore "pytest.mark.gpu" for RELAX tests [tvm]
via GitHub
Re: [PR] Restore "pytest.mark.gpu" for RELAX tests [tvm]
via GitHub
Re: [PR] Restore "pytest.mark.gpu" for RELAX tests [tvm]
via GitHub
Re: [PR] Restore "pytest.mark.gpu" for RELAX tests [tvm]
via GitHub
Re: [PR] Restore "pytest.mark.gpu" for RELAX tests [tvm]
via GitHub
Re: [PR] Restore "pytest.mark.gpu" for RELAX tests [tvm]
via GitHub
Re: [PR] Restore "pytest.mark.gpu" for RELAX tests [tvm]
via GitHub
Re: [PR] Restore "pytest.mark.gpu" for RELAX tests [tvm]
via GitHub
(tvm) branch main updated: [TVMScript] Optionally use `ruff format` instead of `black` (#16876)
sanirudh
(tvm) branch nightly updated (d4056ca795 -> 460f6f1d3e)
github-bot
(tvm) branch main updated: [QoL][Relax] Infer StructInfo for relax::Tuple on construction (#16860)
wuwei
(tvm) branch main updated: [QoL][Relax] Return well-formed IR from relax::Function::CreateEmpty (#16861)
wuwei
(tvm) branch main updated: [TVMScript][Bug] Add test case for missing symbolic bounds (#16877)
wuwei
[PR] [BYOC] Add layout check and update shape check for cublas FP8 BYOC [tvm]
via GitHub
Re: [PR] [BYOC] Add layout check and update shape check for cublas FP8 BYOC [tvm]
via GitHub
Re: [PR] [BYOC] Add layout check and update shape check for cublas FP8 BYOC [tvm]
via GitHub
Re: [PR] [BYOC] Add layout check and update shape check for cublas FP8 BYOC [tvm]
via GitHub
(tvm) branch main updated: [CUBLAS] Set fp32 compute and scale dtypes in fp16 matmul (#16892)
wuwei
[PR] [Dlight] Enhance vectorization for gpu matmul [tvm]
via GitHub
Re: [PR] [Dlight] Enhance vectorization for gpu matmul [tvm]
via GitHub
Re: [PR] [Dlight] Enhance vectorization for gpu matmul [tvm]
via GitHub
Re: [PR] [RUNTIME][VULKAN] Support total_global_memory [tvm]
via GitHub
(tvm) branch main updated: [RUNTIME][VULKAN] Support total_global_memory (#16890)
ruihangl
Re: [PR] [Target] Use LLVM target parser for determining Arm(R) A-Profile Architecture features [tvm]
via GitHub
(tvm) branch main updated: [CUBLAS][FP8] Support e4m3 gemm in cuBLAS BYOC (#16888)
wuwei
[PR] [SVE] Check for SVE target in func_attr from VectorizeLoop [tvm]
via GitHub
Re: [PR] [SVE] Check for SVE target in func_attr from VectorizeLoop [tvm]
via GitHub
Re: [PR] [SVE] Check for SVE target in func_attr from VectorizeLoop [tvm]
via GitHub
Re: [PR] [SVE] Check for SVE target in VectorizeLoop [tvm]
via GitHub
Re: [PR] [SVE] Check for SVE target in VectorizeLoop [tvm]
via GitHub
Re: [PR] [SVE] Check for SVE target in VectorizeLoop [tvm]
via GitHub
Re: [PR] [SVE] Check for SVE target in VectorizeLoop [tvm]
via GitHub
Re: [PR] [SVE] Check for SVE target in VectorizeLoop [tvm]
via GitHub
Re: [PR] [SVE] Check for SVE target in VectorizeLoop [tvm]
via GitHub
Re: [PR] [SVE] Check for SVE target in VectorizeLoop [tvm]
via GitHub
[PR] [CUBLAS] Set fp32 compute and scale dtypes in fp16 matmul [tvm]
via GitHub
Re: [PR] [CUBLAS] Set fp32 compute and scale dtypes in fp16 matmul [tvm]
via GitHub
[I] [Bug] `MatMul` operator in TVM seems fragile [tvm]
via GitHub
Re: [I] [Bug] `MatMul` operator in TVM seems fragile [tvm]
via GitHub
(tvm) branch main updated: [Contrib] Enable fp16 for thrust sort (#16887)
tqchen
(tvm) branch main updated (e738f1d4f1 -> 95d6778908)
tqchen
(tvm) branch main updated (cdfdd0e4ec -> e738f1d4f1)
tqchen
[I] [Bug] Init block not discoverable after sch.blockize [tvm]
via GitHub
Re: [I] [Bug] Init block not discoverable after sch.blockize [tvm]
via GitHub
Re: [I] [Bug] Init block not discoverable after sch.blockize [tvm]
via GitHub
[PR] [CUBLAS][FP8] Support e4m3 gemm in cuBLAS BYOC [tvm]
via GitHub
Re: [PR] [CUBLAS][FP8] Support e4m3 gemm in cuBLAS BYOC [tvm]
via GitHub
[PR] [Contrib] Enable fp16 for thrust [tvm]
via GitHub
Re: [PR] [Contrib] Enable fp16 for thrust sort [tvm]
via GitHub
Re: [PR] [Relax][Frontend] Fix sort, argsort and topk in nn module [tvm]
via GitHub
Re: [PR] [Relax][Frontend] Fix sort, argsort and topk in nn module [tvm]
via GitHub
(tvm) branch nightly updated (a64d1f1cc3 -> d4056ca795)
github-bot
Re: [I] [Bug] InitCCLPerWorker Fails when using AMD GPU Bridge [tvm]
via GitHub
[PR] Bump sqlparse from 0.4.3 to 0.5.0 in /apps/microtvm [tvm]
via GitHub
(tvm) branch dependabot/pip/apps/microtvm/sqlparse-0.5.0 created (now 824003e6f5)
github-bot
[PR] [dlight] Add check for matmul dtype and fix reduction rule [tvm]
via GitHub
Re: [PR] [dlight] Add check for matmul dtype and fix reduction rule [tvm]
via GitHub
(tvm) branch main updated (f267691fa4 -> d4056ca795)
ekalda
(tvm) branch main updated (a64d1f1cc3 -> f267691fa4)
tqchen
[PR] [Relax] Stabilize relax pass mutation order [tvm]
via GitHub
Re: [PR] [Relax] Stabilize relax pass mutation order [tvm]
via GitHub
(tvm) branch nightly updated (64911ab5da -> a64d1f1cc3)
github-bot
(tvm) branch main updated: [TIR] Make T.reinterpret nop when dtype is the same (#16879)
tqchen
[PR] [Codegen][Debug] fix unnumbered reshape in graph executor [tvm]
via GitHub
(tvm) branch nightly updated (0a3fe22208 -> 64911ab5da)
github-bot
(tvm) tag v0.16.0.rc0 created (now 64969035fd)
ysh329
(tvm) tag v0.17.dev0 created (now d0cbb02e1d)
ysh329
(tvm) branch main updated: [Runtime] Implemented Datatype.itemsize() (#16880)
tqchen
(tvm) branch main updated (5c80691c81 -> d0cbb02e1d)
wuwei
(tvm) 02/02: [release] Update version to 0.17.dev0 on main branch
wuwei
(tvm) 01/02: [release] Update version to 0.16.0 on main branch
wuwei
(tvm) branch main updated: [Dlight] Enhance vectorization loading weight for gemv (#16878)
tqchen
Re: [PR] [Dlight] Enhance vectorization loading weight for gemv [tvm]
via GitHub
[PR] [WIP][release][Dont Squash] Update version to 0.16.0 and 0.17.0.dev on main branch [tvm]
via GitHub
Re: [PR] [release][Dont Squash] Update version to 0.16.0 and 0.17.0.dev on main branch [tvm]
via GitHub
(tvm) branch nightly updated (88a1c6560c -> 0a3fe22208)
github-bot
[PR] [Runtime] Implemented Datatype.itemsize() [tvm]
via GitHub
Re: [PR] [Runtime] Implemented Datatype.itemsize() [tvm]
via GitHub
Re: [PR] [Runtime] Implemented Datatype.itemsize() [tvm]
via GitHub
[PR] [TIR] Make T.reinterpret nop when dtype is the same [tvm]
via GitHub
Re: [PR] [TIR] Make T.reinterpret nop when dtype is the same [tvm]
via GitHub
[PR] [TVMScript][Bug] Add test case for missing symbolic bounds [tvm]
via GitHub
Re: [PR] [TVMScript][Bug] Add test case for missing symbolic bounds [tvm]
via GitHub
(tvm) branch main updated: [Relax] Enhance symbolic expr estimation in memory planning (#16872)
tqchen
[PR] [TVMScript] Optionally use `ruff format` instead of `black` [tvm]
via GitHub
Re: [PR] [TVMScript] Optionally use `ruff format` instead of `black` [tvm]
via GitHub
Re: [PR] [TVMScript] Optionally use `ruff format` instead of `black` [tvm]
via GitHub
Re: [PR] [TVMScript] Optionally use `ruff format` instead of `black` [tvm]
via GitHub
Re: [PR] [TVMScript] Optionally use `ruff format` instead of `black` [tvm]
via GitHub
(tvm) branch main updated: [Thrust] Fix thrust workspace allocation (#16873)
tqchen
(tvm) branch nightly updated (f9e36fcbf8 -> 88a1c6560c)
github-bot
(tvm) branch dependabot/pip/apps/microtvm/idna-3.7 created (now 557d185544)
github-bot
[PR] Bump idna from 3.4 to 3.7 in /apps/microtvm [tvm]
via GitHub
(tvm) branch dependabot/pip/docker/python/idna-3.7 created (now 4fdd576c8a)
github-bot
[PR] Bump idna from 3.3 to 3.7 in /docker/python [tvm]
via GitHub
(tvm) branch main updated: [3rdparty] Bump flashinfer (#16868)
tqchen
(tvm) branch main updated: [PageKV] allow PopN to pop all the tokens in last block (#16871)
tqchen
[PR] [Thrust] Fix thrust workspace allocation [tvm]
via GitHub
Re: [PR] [Thrust] Fix thrust workspace allocation [tvm]
via GitHub
Re: [PR] [Thrust] Fix thrust workspace allocation [tvm]
via GitHub
[PR] [Relax] Enhance symbolic expr estimation in memory planning [tvm]
via GitHub
Re: [PR] [Relax] Enhance symbolic expr estimation in memory planning [tvm]
via GitHub
[PR] [RFC] Add NNEF frontend #108 [tvm-rfcs]
via GitHub
Re: [PR] [RFC] Add NNEF frontend [tvm-rfcs]
via GitHub
Re: [PR] [RFC] Add NNEF frontend [tvm-rfcs]
via GitHub
Re: [PR] [RFC] Add NNEF frontend [tvm-rfcs]
via GitHub
Re: [PR] [RFC] Add NNEF frontend [tvm-rfcs]
via GitHub
Re: [PR] [RFC] Add NNEF frontend [tvm-rfcs]
via GitHub
Re: [PR] [RFC] Add NNEF frontend [tvm-rfcs]
via GitHub
Re: [PR] [RFC] Add NNEF frontend [tvm-rfcs]
via GitHub
Re: [PR] [RFC] Add NNEF frontend [tvm-rfcs]
via GitHub
Re: [PR] [RFC] Add NNEF frontend [tvm-rfcs]
via GitHub
Re: [I] TVMError: Binary was created using cuda but a loader of that name is not registered [tvm]
via GitHub
Re: [I] TVMError: Binary was created using cuda but a loader of that name is not registered [tvm]
via GitHub
[PR] [PageKV] allow PopN to pop all the tokens in last block [tvm]
via GitHub
Re: [PR] [PageKV] allow PopN to pop all the tokens in last block [tvm]
via GitHub
(tvm) branch main updated: [OpenCL] Add OpenCL device for automatic target detection (#16854)
tqchen
[I] [Bug] Inconsistent Results between Direct Optimization and Sequential Optimization in TVM [tvm]
via GitHub
Re: [I] [Bug] Inconsistent Results between Direct Optimization and Sequential Optimization in TVM [tvm]
via GitHub
Re: [I] [Bug] Inconsistent Results between Direct Optimization and Sequential Optimization in TVM [tvm]
via GitHub
[I] [Bug] Error in compiling model after applying LazyGradientInit optimization [tvm]
via GitHub
Re: [I] [Bug] Error in compiling model after applying LazyGradientInit optimization [tvm]
via GitHub
(tvm) branch main updated: [BugFix][Target] Added null check to fix segfault at ->defined() in cpu.cc DetectSystemTriple() (#16766)
lukhut
Re: [I] [Bug] VTA FSIM MacOS incompatibility [tvm]
via GitHub
Re: [I] [Bug] VTA FSIM MacOS incompatibility [tvm]
via GitHub
(tvm) branch nightly updated (4d4f0508a2 -> f9e36fcbf8)
github-bot
(tvm) branch main updated: [3rdparty] Bump FlashInfer (#16866)
wuwei
Re: [PR] [3rdparty] Bump FlashInfer [tvm]
via GitHub
[PR] [3rdparty] Bump flashinfer [tvm]
via GitHub
Re: [PR] [3rdparty] Bump flashinfer [tvm]
via GitHub
Re: [PR] [3rdparty] Bump flashinfer [tvm]
via GitHub
[PR] [3rdparty] Bump FlashInfer [tvm]
via GitHub
Re: [PR] [3rdparty] Bump FlashInfer [tvm]
via GitHub
Re: [PR] [3rdparty] Bump FlashInfer [tvm]
via GitHub
[PR] [3rdparty] Bump FlashInfer [tvm]
via GitHub
Re: [PR] [3rdparty] Bump FlashInfer [tvm]
via GitHub
(tvm) branch main updated: [Relax] Dispatch sort/scan for non-cuda gpu backends (#16867)
yongwww
Re: [PR] [Relax] Dispatch sort/scan for non-cuda gpu backends [tvm]
via GitHub
(tvm) branch main updated (2829b59e1c -> 6748215b42)
tqchen
(tvm) branch main updated: [TVMScript] Add parser and printer support for e4m3/e5m2 fp8 (#16864)
tqchen
(tvm) branch main updated (95cb0de27a -> a482b4c191)
tqchen
(tvm) branch main updated: [VULKAN] Fix CLZ support for Vulkan (#16858)
tqchen
(tvm) branch nightly updated (a309b6b857 -> 4d4f0508a2)
github-bot
[PR] Feat/fp8 broadcast [tvm]
via GitHub
Re: [PR] [Codegen, CUDA] Add handling of fp8 broadcast / const [tvm]
via GitHub
Re: [PR] [Codegen, CUDA] Add handling of fp8 broadcast / const [tvm]
via GitHub
Re: [PR] [Codegen, CUDA] Add handling of fp8 broadcast / const [tvm]
via GitHub
[PR] [TVMScript] Add parser and printer support for e4m3/e5m2 fp8 [tvm]
via GitHub
Re: [PR] [TVMScript] Add parser and printer support for e4m3/e5m2 fp8 [tvm]
via GitHub
[PR] [Picojson] Let the key of objects in json be ordered by default [tvm]
via GitHub
Re: [PR] [Picojson] Let the key of objects in json be ordered by default [tvm]
via GitHub
Re: [PR] [Picojson] Let the key of objects in json be ordered by default [tvm]
via GitHub
Re: [PR] [Picojson] Let the key of objects in json be ordered by default [tvm]
via GitHub
[PR] [SVE] Support splitting by vscale in `tir::split` and `te::split` [tvm]
via GitHub
Re: [PR] [SVE] Support splitting by vscale in `tir::split` and `te::split` [tvm]
via GitHub
Re: [PR] [SVE] Support splitting by vscale in `tir::split` and `te::split` [tvm]
via GitHub
Re: [PR] [SVE] Support splitting by vscale in `tir::split` and `te::split` [tvm]
via GitHub
Re: [PR] [SVE] Support splitting by vscale in `tir::split` and `te::split` [tvm]
via GitHub
Re: [PR] [SVE] Support splitting by vscale in `tir::split` and `te::split` [tvm]
via GitHub
Re: [PR] [SVE] Support splitting by vscale in `tir::split` and `te::split` [tvm]
via GitHub
Re: [PR] [QoL][Relax] Use SeqExpr in IR types when SeqExpr is required [tvm]
via GitHub
Re: [PR] [QoL][Relax] Use SeqExpr in IR types when SeqExpr is required [tvm]
via GitHub
Re: [PR] [QoL][Relax] Use SeqExpr in IR types when SeqExpr is required [tvm]
via GitHub
Re: [PR] [QoL][Relax] Use SeqExpr in IR types when SeqExpr is required [tvm]
via GitHub
Re: [PR] [QoL][Relax] Use SeqExpr in IR types when SeqExpr is required [tvm]
via GitHub
[PR] [QoL][Relax] Infer StructInfo for relax::Tuple on construction [tvm]
via GitHub
Re: [PR] [QoL][Relax] Infer StructInfo for relax::Tuple on construction [tvm]
via GitHub
[PR] [QoL][Relax] Return well-formed IR from relax::Function::CreateEmpty [tvm]
via GitHub
Re: [PR] [QoL][Relax] Return well-formed IR from relax::Function::CreateEmpty [tvm]
via GitHub
(tvm) branch main updated: [SVE] Support scalable vectors in LoopVectorizer (#16782)
lukhut
[PR] [VULKAN] Fix CLZ support for Vulkan [tvm]
via GitHub
Re: [PR] [VULKAN] Fix CLZ support for Vulkan [tvm]
via GitHub
Re: [PR] [VULKAN] Fix CLZ support for Vulkan [tvm]
via GitHub
Re: [PR] [VULKAN] Fix CLZ support for Vulkan [tvm]
via GitHub
[I] [Release] v0.16.0 release schedule [tvm]
via GitHub
Re: [I] [Release] v0.16.0 release schedule [tvm]
via GitHub
Re: [I] [Release] v0.16.0 release schedule [tvm]
via GitHub
Re: [I] [Release] v0.16.0 release schedule [tvm]
via GitHub
Re: [I] [Release] v0.16.0 release schedule [tvm]
via GitHub
(tvm) branch nightly updated (81a850693d -> a309b6b857)
github-bot
(tvm) branch main updated: [Thrust] Use pointer to tls pool to prevent creating new pool (#16856)
wuwei
(tvm) branch main updated: [ONNX] Fix interpreting auto_pad parameters in ConvTranspose operator (#16001)
yongwww
[PR] [Thrust] Use pointer to tls pool to prevent creating new pool [tvm]
via GitHub
Re: [PR] [Thrust] Use pointer to tls pool to prevent creating new pool [tvm]
via GitHub
(tvm) branch main updated (81a850693d -> d1e24ca721)
tqchen
[I] [Bug] https://github.com/apache/tvm/blob/main/src/te/schedule/message_passing.cc#L415 [tvm]
via GitHub
Re: [I] [Bug] https://github.com/apache/tvm/blob/main/src/te/schedule/message_passing.cc#L417 [tvm]
via GitHub
Re: [PR] [BugFix][Target] Added null check to fix segfault at ->defined() in cpu.cc DetectSystemTriple() [tvm]
via GitHub
Re: [PR] [BugFix][Target] Added null check to fix segfault at ->defined() in cpu.cc DetectSystemTriple() [tvm]
via GitHub
Re: [PR] [BugFix][Target] Added null check to fix segfault at ->defined() in cpu.cc DetectSystemTriple() [tvm]
via GitHub
Re: [PR] [BugFix][Target] Added null check to fix segfault at ->defined() in cpu.cc DetectSystemTriple() [tvm]
via GitHub