discuss-archive
Messages by Thread
[I] HTML5-Responsive-Portfolio: Personal portfolio template [tvm]
via GitHub
[I] Supabase-Edge-Functions: Serverless function examples [tvm]
via GitHub
[I] Webpack-Vue-Starter: Production-grade Vue build tooling [tvm]
via GitHub
[I] Streamlit-ML-Dashboard: ML visualization dashboard [tvm]
via GitHub
[I] URL-Shortener-API: URL shortening service [tvm]
via GitHub
[I] Tools & Utilities (61-80) [tvm]
via GitHub
[I] Node-Express-Mongo-API: RESTful API server [tvm]
via GitHub
[I] HTML5-Responsive-Portfolio: Personal portfolio template [tvm]
via GitHub
[I] Webpack-Vue-Starter: Production-grade Vue build tooling [tvm]
via GitHub
[I] Flappy-Bird-Clone: Flappy Bird clone [tvm]
via GitHub
[I] Reinforcement-Learning-Gym: RL environment training [tvm]
via GitHub
[I] Astro-Static-Site: Ultra-fast static site generator [tvm]
via GitHub
[I] Sudoku-Solver-Web: Sudoku solver [tvm]
via GitHub
[I] Vite-React-TS-Boilerplate: TypeScript React scaffold [tvm]
via GitHub
[I] [Bug] Tirx's `prim_func` handles the `and` operator inconsistently in positional vs keyword arguments of a function call [tvm]
via GitHub
Re: [I] [Bug] Tirx's `prim_func` handles the `and` operator inconsistently in positional vs keyword arguments of a function call [tvm]
via GitHub
Re: [I] [Bug] Tirx's `prim_func` handles the `and` operator inconsistently in positional vs keyword arguments of a function call [tvm]
via GitHub
Re: [I] [Bug] Tirx's `prim_func` handles the `and` operator inconsistently in positional vs keyword arguments of a function call [tvm]
via GitHub
[I] [Tracking Issue][TFLite] Expand unit test coverage for supported non-quantized operators [tvm]
via GitHub
Re: [I] [Tracking Issue][TFLite] Expand unit test coverage for supported non-quantized operators [tvm]
via GitHub
Re: [I] [Tracking Issue][TFLite] Expand unit test coverage for supported non-quantized operators [tvm]
via GitHub
Re: [I] [Tracking Issue][TFLite] Expand unit test coverage for supported non-quantized operators [tvm]
via GitHub
Re: [I] [Tracking Issue][TFLite] Expand unit test coverage for supported non-quantized operators [tvm]
via GitHub
Re: [I] [Tracking Issue][TFLite] Expand unit test coverage for supported non-quantized operators [tvm]
via GitHub
Re: [I] [Tracking Issue][TFLite] Expand unit test coverage for supported non-quantized operators [tvm]
via GitHub
Re: [I] [Tracking Issue][TFLite] Expand unit test coverage for supported non-quantized operators [tvm]
via GitHub
Re: [I] [Tracking Issue][TFLite] Expand unit test coverage for supported non-quantized operators [tvm]
via GitHub
Re: [I] [Tracking Issue][TFLite] Expand unit test coverage for supported non-quantized operators [tvm]
via GitHub
Re: [I] [Tracking Issue][TFLite] Expand unit test coverage for supported non-quantized operators [tvm]
via GitHub
Re: [I] [Tracking Issue][TFLite] Expand unit test coverage for supported non-quantized operators [tvm]
via GitHub
Re: [I] [Tracking Issue][TFLite] Expand unit test coverage for supported non-quantized operators [tvm]
via GitHub
Re: [I] [Tracking Issue][TFLite] Expand unit test coverage for supported non-quantized operators [tvm]
via GitHub
Re: [I] [Tracking Issue][TFLite] Expand unit test coverage for supported non-quantized operators [tvm]
via GitHub
[PR] [Relax][TFLite] Add expected IRModule checks for conv2d, pool2d, and batch_matmul tests [tvm]
via GitHub
Re: [PR] [Relax][TFLite] Add expected IRModule checks for conv2d, pool2d, and batch_matmul tests [tvm]
via GitHub
Re: [PR] [TFLite][Frontend] Add expected IRModule checks for conv2d, pool2d, and batch_matmul tests [tvm]
via GitHub
[PR] [Frontend][ONNX] Support select_last_index for ArgMax and ArgMin [tvm]
via GitHub
Re: [PR] [Frontend][ONNX] Support select_last_index for ArgMax and ArgMin [tvm]
via GitHub
Re: [PR] [Frontend][ONNX] Support select_last_index for ArgMax and ArgMin [tvm]
via GitHub
[PR] [Web] Fix static init order in WASM runtime to prevent GetKwargsObject crash [tvm]
via GitHub
Re: [PR] [Web] Fix static init order in WASM runtime to prevent GetKwargsObject crash [tvm]
via GitHub
Re: [PR] [Web] Fix static init order in WASM runtime to prevent GetKwargsObject crash [tvm]
via GitHub
Re: [PR] [Web] Fix static init order in WASM runtime to prevent GetKwargsObject crash [tvm]
via GitHub
Re: [PR] [Web] Fix static init order in WASM runtime to prevent GetKwargsObject crash [tvm]
via GitHub
Re: [PR] [Web] Fix static init order in WASM runtime to prevent GetKwargsObject crash [tvm]
via GitHub
[PR] [DRAFT][DO NOT MERGE] Bump tvm-ffi to 1fed0a [tvm]
via GitHub
Re: [PR] Bump tvm-ffi to 1fed0a [tvm]
via GitHub
[GH] (tvm-ffi/container-patch): Workflow run "CI" failed!
GitBox
[PR] [fix] Move container stream scanning from Cython to C++ for efficiency [tvm-ffi]
via GitHub
Re: [PR] [fix] Move container stream scanning from Cython to C++ for efficiency [tvm-ffi]
via GitHub
[PR] [DOC] Fix various issues [tvm]
via GitHub
Re: [PR] [DOC] Fix various issues [tvm]
via GitHub
Re: [PR] [DOC] Fix various issues [tvm]
via GitHub
[GH] (tvm-ffi/dlpack-container-convert): Workflow run "CI" is working again!
GitBox
[PR] [Docs] Fix outdated code examples, types, and missing references across documentation [tvm]
via GitHub
Re: [PR] [Docs] Fix outdated code examples, types, and missing references across documentation [tvm]
via GitHub
Re: [PR] [Docs] Fix outdated code examples, types, and missing references across documentation [tvm]
via GitHub
[I] [VOTE] Release Apache TVM FFI v0.1.10-rc2 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.10-rc2 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.10-rc2 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.10-rc2 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.10-rc2 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.10-rc2 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.10-rc2 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.10-rc2 [tvm-ffi]
via GitHub
[GH] (tvm-ffi/lazyinit): Workflow run "CI" is working again!
GitBox
[PR] [release][Dont Squash] Update version to 0.24.0 and 0.25.dev0 on main branch [tvm]
via GitHub
Re: [PR] [release][Dont Squash] Update version to 0.24.0 and 0.25.dev0 on main branch [tvm]
via GitHub
Re: [PR] [release][Dont Squash] Update version to 0.24.0 and 0.25.dev0 on main branch [tvm]
via GitHub
[PR] [Relax][ONNX] Support Resize dynamic ROI via TOPI [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Support Resize dynamic ROI via TOPI [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Support Resize dynamic ROI via TOPI [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Support Resize dynamic ROI via TOPI [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Support Resize dynamic ROI via TOPI [tvm]
via GitHub
[PR] [FFI][Reflection] Lazily resolve KWARGS sentinel in auto-init [tvm-ffi]
via GitHub
Re: [PR] [FFI][Reflection] Lazily resolve KWARGS sentinel in auto-init [tvm-ffi]
via GitHub
[PR] [Web] Pre-allocate TypedArray views for pod args in WebGPU dispatch [tvm]
via GitHub
Re: [PR] [Web] Pre-allocate TypedArray views for pod args in WebGPU dispatch [tvm]
via GitHub
Re: [PR] [Web] Pre-allocate TypedArray views for pod args in WebGPU dispatch [tvm]
via GitHub
Re: [PR] [Web] Pre-allocate TypedArray views for pod args in WebGPU dispatch [tvm]
via GitHub
Re: [PR] [Web] Pre-allocate TypedArray views for pod args in WebGPU dispatch [tvm]
via GitHub
Re: [PR] [Web] Pre-allocate TypedArray views for pod args in WebGPU dispatch [tvm]
via GitHub
[PR] [WebGPU] Reserve additional keywords to avoid WGSL identifier collisions [tvm]
via GitHub
Re: [PR] [WebGPU] Reserve additional keywords to avoid WGSL identifier collisions [tvm]
via GitHub
Re: [PR] [WebGPU] Reserve additional keywords to avoid WGSL identifier collisions [tvm]
via GitHub
Re: [PR] [WebGPU] Reserve additional keywords to avoid WGSL identifier collisions [tvm]
via GitHub
[PR] [Fix] Replace str(target.kind) with target.kind.name for Target objects [tvm]
via GitHub
Re: [PR] [Fix] Replace str(target.kind) with target.kind.name for Target objects [tvm]
via GitHub
Re: [PR] [Fix] Replace str(target.kind) with target.kind.name for Target objects [tvm]
via GitHub
[PR] [Web] Fix rollup errors and bump tvmjs version [tvm]
via GitHub
Re: [PR] [Web] Fix rollup errors and bump tvmjs version [tvm]
via GitHub
Re: [PR] [Web] Fix rollup errors and bump tvmjs version [tvm]
via GitHub
Re: [PR] [Web] Fix rollup errors and bump tvmjs version [tvm]
via GitHub
[PR] [FIX] Inline ceil_log2 in gpu_2d_continuous_cumsum to fix MakePackedAPI error [tvm]
via GitHub
Re: [PR] [FIX] Inline ceil_log2 in gpu_2d_continuous_cumsum to fix MakePackedAPI error [tvm]
via GitHub
Re: [PR] [FIX] Inline ceil_log2 in gpu_2d_continuous_cumsum to fix MakePackedAPI error [tvm]
via GitHub
Re: [PR] [FIX] Inline ceil_log2 in gpu_2d_continuous_cumsum to fix MakePackedAPI error [tvm]
via GitHub
Re: [PR] [FIX] Inline ceil_log2 in gpu_2d_continuous_cumsum to fix MakePackedAPI error [tvm]
via GitHub
Re: [PR] [FIX] Inline ceil_log2 in gpu_2d_continuous_cumsum to fix MakePackedAPI error [tvm]
via GitHub
Re: [PR] [FIX] Inline ceil_log2 in gpu_2d_continuous_cumsum to fix MakePackedAPI error [tvm]
via GitHub
Re: [PR] [FIX] Inline ceil_log2 in gpu_2d_continuous_cumsum to fix MakePackedAPI error [tvm]
via GitHub
Re: [PR] [FIX] Inline ceil_log2 in gpu_2d_continuous_cumsum to fix MakePackedAPI error [tvm]
via GitHub
[GH] (tvm-ffi/junrushao/2026-03-30/fix-publish-wheel-action-pin): Workflow run "CI" is working again!
GitBox
[PR] fix(ci): pin pypa/gh-action-pypi-publish to SHA for Apache allowlist [tvm-ffi]
via GitHub
Re: [PR] fix(ci): pin pypa/gh-action-pypi-publish to SHA for Apache allowlist [tvm-ffi]
via GitHub
Re: [PR] fix(ci): pin pypa/gh-action-pypi-publish to SHA for Apache allowlist [tvm-ffi]
via GitHub
Re: [PR] fix(ci): pin pypa/gh-action-pypi-publish to SHA for Apache allowlist [tvm-ffi]
via GitHub
[GH] (tvm-ffi/junrushao/2026-03-30/fix-publish-wheel-action-pin): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/junrushao/2026-03-30/fix-publish-wheel-action-pin): Workflow run "CI" failed!
GitBox
[PR] [Relax][ONNX] Complete ShapeExpr reshape handling in ONNX frontend [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Complete ShapeExpr reshape handling in ONNX frontend [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Complete ShapeExpr reshape handling in ONNX frontend [tvm]
via GitHub
[PR] [Relax][ONNX] Fix shape/dynamic restrictions for `Squeeze`/`Unsqueeze` and `Slice` [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Fix shape/dynamic restrictions for `Squeeze`/`Unsqueeze` and `Slice` [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Fix shape/dynamic restrictions for `Squeeze`/`Unsqueeze` and `Slice` [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Fix shape/dynamic restrictions for `Squeeze`/`Unsqueeze` and `Slice` [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Fix shape/dynamic restrictions for `Squeeze`/`Unsqueeze` and `Slice` [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Fix shape/dynamic restrictions for `Squeeze`/`Unsqueeze` and `Slice` [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Fix shape/dynamic restrictions for `Squeeze`/`Unsqueeze` and `Slice` [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Fix shape/dynamic restrictions for `Squeeze`/`Unsqueeze` and `Slice` [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Fix shape/dynamic restrictions for `Squeeze`/`Unsqueeze` and `Slice` [tvm]
via GitHub
[PR] [Relax][ONNX] Fix shape/dynamic restrictions for `Unsqueeze`/`Squeeze` and `Slice` [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Fix shape/dynamic restrictions for `Unsqueeze`/`Squeeze` and `Slice` [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Fix shape/dynamic restrictions for `Unsqueeze`/`Squeeze` and `Slice` [tvm]
via GitHub
[GH] (tvm-ffi/dlpack-container-convert): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/dlpack-container-convert): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/dlpack-container-convert): Workflow run "CI" failed!
GitBox
[PR] [FEAT] Recursive DLPack container conversion for auto torch.Tensor return [tvm-ffi]
via GitHub
Re: [PR] [FEAT] Recursive DLPack container conversion for auto torch.Tensor return [tvm-ffi]
via GitHub
Re: [PR] [FEAT] Recursive DLPack container conversion for auto torch.Tensor return [tvm-ffi]
via GitHub
Re: [PR] [FEAT] Recursive DLPack container conversion for auto torch.Tensor return [tvm-ffi]
via GitHub
Re: [PR] [FEAT] Recursive DLPack container conversion for auto torch.Tensor return [tvm-ffi]
via GitHub
Re: [PR] [FEAT] Recursive DLPack container conversion for auto torch.Tensor return [tvm-ffi]
via GitHub
Re: [PR] [FEAT] Recursive DLPack container conversion for auto torch.Tensor return [tvm-ffi]
via GitHub
Re: [PR] [FEAT] Recursive DLPack container conversion for auto torch.Tensor return [tvm-ffi]
via GitHub
Re: [PR] [FEAT] Recursive DLPack container conversion for auto torch.Tensor return [tvm-ffi]
via GitHub
Re: [PR] [FEAT] Recursive DLPack container conversion for auto torch.Tensor return [tvm-ffi]
via GitHub
Re: [PR] [FEAT] Recursive DLPack container conversion for auto torch.Tensor return [tvm-ffi]
via GitHub
Re: [PR] [FEAT] Recursive DLPack container conversion for auto torch.Tensor return [tvm-ffi]
via GitHub
Re: [PR] [FEAT] Recursive DLPack container conversion for auto torch.Tensor return [tvm-ffi]
via GitHub
Re: [PR] [FEAT] Recursive DLPack container conversion for auto torch.Tensor return [tvm-ffi]
via GitHub
[PR] [Docs] Align documentation with tirx/s_tir namespace split [tvm]
via GitHub
Re: [PR] [Docs] Align documentation with tirx/s_tir namespace split [tvm]
via GitHub
Re: [PR] [Docs] Align documentation with tirx/s_tir namespace split [tvm]
via GitHub
Re: [PR] [Docs] Align documentation with tirx/s_tir namespace split [tvm]
via GitHub
[I] [VOTE] Release Apache TVM FFI v0.1.10-rc1 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.10-rc1 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.10-rc1 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.10-rc1 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.10-rc1 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.10-rc1 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.10-rc1 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.10-rc1 [tvm-ffi]
via GitHub
Re: [I] [VOTE] Release Apache TVM FFI v0.1.10-rc1 [tvm-ffi]
via GitHub
[PR] [Relax][ONNX] Add roi_pool op and MaxRoiPool frontend support [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Add roi_pool op and MaxRoiPool frontend support [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Add roi_pool op and MaxRoiPool frontend support [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Add roi_pool op and MaxRoiPool frontend support [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Add roi_pool op and MaxRoiPool frontend support [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Add roi_pool op and MaxRoiPool frontend support [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Add roi_pool op and MaxRoiPool frontend support [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Add roi_pool op and MaxRoiPool frontend support [tvm]
via GitHub
[PR] [Frontend][ONNX] Add MatMulInteger support to Relax ONNX frontend [tvm]
via GitHub
Re: [PR] [Frontend][ONNX] Add MatMulInteger support to Relax ONNX frontend [tvm]
via GitHub
Re: [PR] [Frontend][ONNX] Add MatMulInteger support to Relax ONNX frontend [tvm]
via GitHub
Re: [PR] [Frontend][ONNX] Add MatMulInteger support to Relax ONNX frontend [tvm]
via GitHub
Re: [PR] [Frontend][ONNX] Add MatMulInteger support to Relax ONNX frontend [tvm]
via GitHub
[GH] (tvm/main): Workflow run "npm_and_yarn in /web for picomatch - Update #1297722541" failed!
GitBox
[GH] (tvm/main): Workflow run "npm_and_yarn in /web for underscore - Update #1297722538" failed!
GitBox
[GH] (tvm/main): Workflow run "npm_and_yarn in /web for flatted - Update #1297722580" is working again!
GitBox
[PR] [Relax][ONNX] Add Optional and MatMulInteger16 frontend support [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Add Optional and MatMulInteger16 frontend support [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Add Optional and MatMulInteger16 frontend support [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Add Optional and MatMulInteger16 frontend support [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Add Optional and MatMulInteger16 frontend support [tvm]
via GitHub
Re: [PR] [Relax][ONNX] Add Optional and MatMulInteger16 frontend support [tvm]
via GitHub
[GH] (tvm/main): Workflow run "pip in /docker/python for cryptography - Update #1297652158" is working again!
GitBox
[PR] Bump cryptography from 41.0.6 to 46.0.6 in /docker/python [tvm]
via GitHub
[PR] [Relax] Add conv3d_transpose and ONNX ConvTranspose 3D support [tvm]
via GitHub
Re: [PR] [Relax] Add conv3d_transpose and ONNX ConvTranspose 3D support [tvm]
via GitHub
Re: [PR] [Relax] Add conv3d_transpose and ONNX ConvTranspose 3D support [tvm]
via GitHub
Re: [PR] [Relax] Add conv3d_transpose and ONNX ConvTranspose 3D support [tvm]
via GitHub
[PR] [Docs] Add BasePyModule tutorial [tvm]
via GitHub
Re: [PR] [Docs] Add BasePyModule tutorial [tvm]
via GitHub
Re: [PR] [Docs] Add BasePyModule tutorial [tvm]
via GitHub
Re: [PR] [Docs] Add BasePyModule tutorial [tvm]
via GitHub
Re: [PR] [Docs] Add tutorial for mixing Python/PyTorch with TVM using BasePyModule [tvm]
via GitHub
Re: [PR] [Docs] Add tutorial for mixing Python/PyTorch with TVM using BasePyModule [tvm]
via GitHub
Re: [PR] [Docs] Add tutorial for mixing Python/PyTorch with TVM using BasePyModule [tvm]
via GitHub
[PR] [Frontend][ONNX] Add If operator support to Relax ONNX frontend [tvm]
via GitHub
Re: [PR] [Frontend][ONNX] Add If operator support to Relax ONNX frontend [tvm]
via GitHub
Re: [PR] [Frontend][ONNX] Add If operator support to Relax ONNX frontend [tvm]
via GitHub
Re: [PR] [Frontend][ONNX] Add If operator support to Relax ONNX frontend [tvm]
via GitHub
Re: [PR] [Frontend][ONNX] Add If operator support to Relax ONNX frontend [tvm]
via GitHub
[I] [Tracking Issue][ONNX] Complete missing and limited operators in ONNX frontend [tvm]
via GitHub
Re: [I] [Tracking Issue][ONNX] Complete missing and limited operators in ONNX frontend [tvm]
via GitHub
Re: [I] [Tracking Issue][ONNX] Complete missing and limited operators in ONNX frontend [tvm]
via GitHub
Re: [I] [Tracking Issue][ONNX] Complete missing and limited operators in ONNX frontend [tvm]
via GitHub
Re: [I] [Tracking Issue][ONNX] Complete missing and limited operators in ONNX frontend [tvm]
via GitHub
Re: [I] [Tracking Issue][ONNX] Complete missing and limited operators in ONNX frontend [tvm]
via GitHub