discuss-archive
Messages by Thread
Re: [PR] [Relax][Onnx] Pass output_padding param in ConvTranspose [tvm]
via GitHub
Re: [PR] [Relax][Onnx] Pass output_padding param in ConvTranspose [tvm]
via GitHub
[PR] [POC] Metal via tvm-ffi [tvm]
via GitHub
Re: [PR] [POC] Metal via tvm-ffi [tvm]
via GitHub
Re: [PR] [POC] Metal via tvm-ffi [tvm]
via GitHub
[GH] (tvm-ffi/2025-12-31/switch-sections): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/2025-12-31/switch-sections): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/2025-12-31/switch-sections): Workflow run "CI" failed!
GitBox
[PR] [Relax] Add TMixedPrecisionPolicy for dynamic_strided_slice [tvm]
via GitHub
Re: [PR] [Relax] Add TMixedPrecisionPolicy for dynamic_strided_slice [tvm]
via GitHub
Re: [PR] [Relax] Add TMixedPrecisionPolicy for dynamic_strided_slice [tvm]
via GitHub
Re: [PR] [Relax] Add FRelaxInferLayout and TMixedPrecisionPolicy for dynamic_strided_slice [tvm]
via GitHub
Re: [PR] [Relax] Add FRelaxInferLayout and TMixedPrecisionPolicy for dynamic_strided_slice [tvm]
via GitHub
Re: [PR] [Relax] Add FRelaxInferLayout and TMixedPrecisionPolicy for dynamic_strided_slice [tvm]
via GitHub
Re: [PR] [Relax] Add FRelaxInferLayout and TMixedPrecisionPolicy for dynamic_strided_slice [tvm]
via GitHub
Re: [PR] [Relax] Add FRelaxInferLayout and TMixedPrecisionPolicy for dynamic_strided_slice [tvm]
via GitHub
Re: [PR] [Relax] Add FRelaxInferLayout and TMixedPrecisionPolicy for dynamic_strided_slice [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix AssertionError: Unsupported function types `randn.default` [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix AssertionError: Unsupported function types `randn.default` [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix AssertionError: Unsupported function types `randn.default` [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix AssertionError: Unsupported function types `randn.default` [tvm]
via GitHub
[I] [Feature Request] support torch 1.14 [tvm-ffi]
via GitHub
Re: [I] [Feature Request] support torch 1.14/2.0.0 [tvm-ffi]
via GitHub
Re: [I] [Feature Request] support torch 1.14/2.0.0 [tvm-ffi]
via GitHub
Re: [I] [Feature Request] support torch 1.14/2.0.0 [tvm-ffi]
via GitHub
Re: [I] [Feature Request] support torch 1.14/2.0.0 [tvm-ffi]
via GitHub
Re: [I] [Feature Request] support torch 1.14/2.0.0 [tvm-ffi]
via GitHub
Re: [I] [Feature Request] support torch 1.14/2.0.0 [tvm-ffi]
via GitHub
[PR] feat: add __bool__ support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] feat: add __bool__ support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] feat: add __bool__ support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] feat: add __bool__ support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] feat: add __bool__ support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] feat: add __bool__ support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] feat: add __bool__ support for Array and Map [tvm-ffi]
via GitHub
[PR] [Relax] Move GetUsedVars to analysis module [tvm]
via GitHub
Re: [PR] [Relax] Move GetUsedVars to analysis module [tvm]
via GitHub
Re: [PR] [Relax] Move GetUsedVars to analysis module [tvm]
via GitHub
Re: [PR] [Relax] Move GetUsedVars to analysis module [tvm]
via GitHub
Re: [PR] [Relax] Move GetUsedVars to analysis module [tvm]
via GitHub
Re: [PR] [Relax] Move GetUsedVars to analysis module [tvm]
via GitHub
Re: [PR] [Relax] Move GetUsedVars to analysis module [tvm]
via GitHub
Re: [PR] [Relax] Move GetUsedVars to analysis module [tvm]
via GitHub
Re: [PR] [Relax] Move GetUsedVars to analysis module [tvm]
via GitHub
[PR] [Relax] Fix HardSigmoid returns 1.0 for NaN input [tvm]
via GitHub
Re: [PR] [Relax] Fix HardSigmoid returns 1.0 for NaN input [tvm]
via GitHub
Re: [PR] [Relax] Fix HardSigmoid returns 1.0 for NaN input [tvm]
via GitHub
[GH] (tvm-ffi/add-array-equality): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/add-array-equality): Workflow run "CI" is working again!
GitBox
[PR] [KVCache] Enable sliding window for ragged prefill (`SelfAttention`) [tvm]
via GitHub
Re: [PR] [KVCache] Enable sliding window for ragged prefill (`SelfAttention`) [tvm]
via GitHub
Re: [PR] [KVCache] Enable sliding window for ragged prefill (`SelfAttention`) [tvm]
via GitHub
Re: [PR] [KVCache] Enable sliding window for ragged prefill (`SelfAttention`) [tvm]
via GitHub
[GH] (tvm-ffi/add-array-equality): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/add-array-equality): Workflow run "CI" failed!
GitBox
[PR] feat: add equality and hash support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] feat: add equality and hash support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] feat: add equality and hash support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] feat: add equality and hash support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] feat: add equality and hash support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] feat: add equality and hash support for Array and Map [tvm-ffi]
via GitHub
Re: [PR] docs: document structural_equal and structural_hash for Array and Map [tvm-ffi]
via GitHub
Re: [PR] docs: document structural_equal and structural_hash for Array and Map [tvm-ffi]
via GitHub
[GH] (tvm-ffi/fix-array-negative-index-check): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/fix-compiler-warnings): Workflow run "CI" failed!
GitBox
[PR] chore: fix compiler warnings [tvm-ffi]
via GitHub
Re: [PR] chore: fix compiler warnings [tvm-ffi]
via GitHub
Re: [PR] chore: fix compiler warnings [tvm-ffi]
via GitHub
Re: [PR] chore: fix compiler warnings [tvm-ffi]
via GitHub
Re: [PR] chore: fix compiler warnings [tvm-ffi]
via GitHub
Re: [PR] chore: fix compiler warnings [tvm-ffi]
via GitHub
[PR] feat: add array __contains__ support [tvm-ffi]
via GitHub
Re: [PR] feat: add array __contains__ support [tvm-ffi]
via GitHub
Re: [PR] feat: add array __contains__ support [tvm-ffi]
via GitHub
Re: [PR] feat: add array __contains__ support [tvm-ffi]
via GitHub
[PR] fix: add negative index bounds check in ArrayObj [tvm-ffi]
via GitHub
Re: [PR] fix: add negative index bounds check in ArrayObj [tvm-ffi]
via GitHub
Re: [PR] fix: add negative index bounds check in ArrayObj [tvm-ffi]
via GitHub
Re: [PR] fix: add negative index bounds check in ArrayObj [tvm-ffi]
via GitHub
Re: [PR] fix: add negative index bounds check in ArrayObj [tvm-ffi]
via GitHub
[PR] fix: integer overflow in GetDataSize [tvm-ffi]
via GitHub
Re: [PR] fix: integer overflow in GetDataSize [tvm-ffi]
via GitHub
Re: [PR] fix: integer overflow in GetDataSize [tvm-ffi]
via GitHub
Re: [PR] fix: add bounds checking for size() and stride() methods [tvm-ffi]
via GitHub
Re: [PR] fix: add bounds checking for size() and stride() methods [tvm-ffi]
via GitHub
Re: [PR] fix: add bounds checking for size() and stride() methods [tvm-ffi]
via GitHub
[GH] (tvm-ffi/cpp_dtype): Workflow run "CI" is working again!
GitBox
[GH] (tvm-ffi/cpp_dtype): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/cpp_dtype): Workflow run "CI" failed!
GitBox
[PR] [Feature] support C++ dtype_trait and Python-side mapping to C++ dtype [tvm-ffi]
via GitHub
Re: [PR] [Feature] support C++ dtype_trait and Python-side mapping to C++ dtype [tvm-ffi]
via GitHub
Re: [PR] [Feature] support C++ dtype_trait and Python-side mapping to C++ dtype [tvm-ffi]
via GitHub
Re: [PR] [Feature] support C++ dtype_trait and Python-side mapping to C++ dtype [tvm-ffi]
via GitHub
Re: [PR] [Feature] support C++ dtype_trait and Python-side mapping to C++ dtype [tvm-ffi]
via GitHub
[PR] doc: c++ toolchain [tvm-ffi]
via GitHub
Re: [PR] doc: c++ toolchain [tvm-ffi]
via GitHub
Re: [PR] doc: c++ toolchain [tvm-ffi]
via GitHub
Re: [PR] doc: c++ toolchain [tvm-ffi]
via GitHub
[PR] doc: Reorder Sections in Python Packaging [tvm-ffi]
via GitHub
Re: [PR] doc: Reorder Sections in Python Packaging [tvm-ffi]
via GitHub
Re: [PR] doc: Reorder Sections in Python Packaging [tvm-ffi]
via GitHub
Re: [PR] doc: Reorder Sections in Python Packaging [tvm-ffi]
via GitHub
Re: [PR] doc: Reorder Sections in Python Packaging [tvm-ffi]
via GitHub
[PR] [Relax] Add FInferMixedPrecision and FRelaxInferLayout for conv transpose ops [tvm]
via GitHub
Re: [PR] [Relax] Add FInferMixedPrecision and FRelaxInferLayout for conv transpose ops [tvm]
via GitHub
Re: [PR] [Relax] Add FInferMixedPrecision and FRelaxInferLayout for conv transpose ops [tvm]
via GitHub
Re: [PR] [Relax] Add FInferMixedPrecision and FRelaxInferLayout for conv transpose ops [tvm]
via GitHub
Re: [PR] [Relax] Add FInferMixedPrecision and FRelaxInferLayout for conv transpose ops [tvm]
via GitHub
[PR] [Fix] Fix typo in file header comment [tvm]
via GitHub
Re: [PR] [Fix] Fix typo in file header comment [tvm]
via GitHub
Re: [PR] [Fix] Fix typo in file header comment [tvm]
via GitHub
[PR] [Relax] Fixes for metal to support bfloat16 [tvm]
via GitHub
Re: [PR] [Relax] Fixes for metal to support bfloat16 [tvm]
via GitHub
Re: [PR] [Relax] Fixes for metal to support bfloat16 [tvm]
via GitHub
Re: [PR] [Relax] Fixes for metal to support bfloat16 [tvm]
via GitHub
[PR] Fix TVMFFIEnvSetDLPackManagedTensorAllocator to correctly return the original allocator [tvm-ffi]
via GitHub
Re: [PR] Fix TVMFFIEnvSetDLPackManagedTensorAllocator to correctly return the original allocator [tvm-ffi]
via GitHub
Re: [PR] Fix TVMFFIEnvSetDLPackManagedTensorAllocator to correctly return the original allocator [tvm-ffi]
via GitHub
Re: [PR] Fix TVMFFIEnvSetDLPackManagedTensorAllocator to correctly return the original allocator [tvm-ffi]
via GitHub
Re: [PR] Fix TVMFFIEnvSetDLPackManagedTensorAllocator to correctly return the original allocator [tvm-ffi]
via GitHub
[PR] [Relax][Op][PyTorch] Supported Median operator [tvm]
via GitHub
Re: [PR] [Relax][Op][PyTorch] Supported Median operator [tvm]
via GitHub
Re: [PR] [Relax][Op][PyTorch] Supported Median operator [tvm]
via GitHub
Re: [PR] [Relax][Op][PyTorch] Supported Median operator [tvm]
via GitHub
Re: [PR] [Relax][Op][PyTorch] Supported Median operator [tvm]
via GitHub
Re: [PR] [Relax][Op][PyTorch] Supported Median operator [tvm]
via GitHub
[GH] (tvm-ffi/2025-12-23/doc-tensor): Workflow run "CI" is working again!
GitBox
[PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
Re: [PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
Re: [PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
Re: [PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
Re: [PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
Re: [PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
Re: [PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
Re: [PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
Re: [PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
[PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
Re: [PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
Re: [PR] [Python][Relax] Fix YaRN correction dim calculation [tvm]
via GitHub
[GH] (tvm-ffi/feat/u64): Workflow run "CI" failed!
GitBox
[GH] (tvm-ffi/feat/u64): Workflow run "CI" failed!
GitBox
[PR] [feat] Add overflow check for uint64_t/size_t in TypeTraits<Int>::CopyToAnyView [tvm-ffi]
via GitHub
Re: [PR] [feat] Add overflow check for uint64_t/size_t in TypeTraits<Int>::CopyToAnyView [tvm-ffi]
via GitHub
Re: [PR] [feat] Add overflow check for uint64_t/size_t in TypeTraits<Int>::CopyToAnyView [tvm-ffi]
via GitHub
Re: [PR] [feat] Add overflow check for uint64_t/size_t in TypeTraits<Int>::CopyToAnyView [tvm-ffi]
via GitHub
Re: [PR] [feat] Add overflow check for uint64_t/size_t in TypeTraits<Int>::CopyToAnyView [tvm-ffi]
via GitHub
Re: [PR] [feat] Add overflow check for uint64_t/size_t in TypeTraits<Int>::CopyToAnyView [tvm-ffi]
via GitHub
Re: [PR] [feat] Add overflow check for uint64_t/size_t in TypeTraits<Int>::CopyToAnyView [tvm-ffi]
via GitHub
Re: [PR] [feat] Add overflow check for uint64_t/size_t in TypeTraits<Int>::CopyToAnyView [tvm-ffi]
via GitHub
Re: [PR] [feat] Add overflow check for uint64_t/size_t in TypeTraits<Int>::CopyToAnyView [tvm-ffi]
via GitHub
Re: [PR] [feat] Add overflow check for uint64_t/size_t in TypeTraits<Int>::CopyToAnyView [tvm-ffi]
via GitHub
[PR] feat: Introduce `<tvm/ffi/tvm_ffi.h>` [tvm-ffi]
via GitHub
Re: [PR] feat: Introduce `<tvm/ffi/tvm_ffi.h>` [tvm-ffi]
via GitHub
Re: [PR] feat: Introduce `<tvm/ffi/tvm_ffi.h>` [tvm-ffi]
via GitHub
Re: [PR] feat: Introduce `<tvm/ffi/tvm_ffi.h>` [tvm-ffi]
via GitHub
Re: [PR] feat: Introduce `<tvm/ffi/tvm_ffi.h>` [tvm-ffi]
via GitHub
[PR] [CUDA] Fix cuModuleUnload crash during interpreter shutdown [tvm]
via GitHub
Re: [PR] [CUDA] Fix cuModuleUnload crash during interpreter shutdown [tvm]
via GitHub
Re: [PR] [CUDA] Fix cuModuleUnload crash during interpreter shutdown [tvm]
via GitHub
Re: [PR] [CUDA] Fix cuModuleUnload crash during interpreter shutdown [tvm]
via GitHub
[PR] Add bfloat16 Support for Metal SIMD Matrix Operations [tvm]
via GitHub
Re: [PR] Add bfloat16 Support for Metal SIMD Matrix Operations [tvm]
via GitHub
Re: [PR] [Relax] Add bfloat16 Support for Metal SIMD Matrix Operations [tvm]
via GitHub
Re: [PR] [Relax] Add bfloat16 Support for Metal SIMD Matrix Operations [tvm]
via GitHub
[PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
Re: [PR] [Relax] Implement FRelaxInferLayout for tile operator [tvm]
via GitHub
[PR] [Relax] Use weight shape instead of dim in Embedding.forward [tvm]
via GitHub
Re: [PR] [Relax] Use weight shape instead of dim in Embedding.forward [tvm]
via GitHub
Re: [PR] [Relax] Use weight shape instead of dim in Embedding.forward [tvm]
via GitHub
Re: [PR] [Relax] Use weight shape instead of dim in Embedding.forward [tvm]
via GitHub
[GH] (tvm-ffi/2025-12-28/docs-release-process): Workflow run "CI" failed!
GitBox
[PR] doc: Add release_process.rst [tvm-ffi]
via GitHub
Re: [PR] doc: Add release_process.rst [tvm-ffi]
via GitHub
Re: [PR] doc: Add release_process.rst [tvm-ffi]
via GitHub
Re: [PR] doc: Add release_process.rst [tvm-ffi]
via GitHub
[PR] chore(release): Version bump after v0.1.7 release [tvm-ffi]
via GitHub
Re: [PR] chore(release): Version bump after v0.1.7 release [tvm-ffi]
via GitHub
Re: [PR] chore(release): Version bump after v0.1.7 release [tvm-ffi]
via GitHub
Re: [PR] chore(release): Version bump after v0.1.7 release [tvm-ffi]
via GitHub
Re: [I] [Bug] `tvm_ffi.get_global_func_metdata` failed for functions that don't have metadata [tvm-ffi]
via GitHub
[I] [RESULT][VOTE] Release Apache TVM FFI v0.1.7-rc0 [tvm-ffi]
via GitHub
Re: [I] [RESULT][VOTE] Release Apache TVM FFI v0.1.7-rc0 [tvm-ffi]
via GitHub
[PR] Fix flaky test_conv2d gradient numeric test [tvm]
via GitHub
Re: [PR] Fix flaky test_conv2d gradient numeric test [tvm]
via GitHub
Re: [PR] [Relax] Fix flaky test_conv2d gradient numeric test [tvm]
via GitHub
Re: [PR] [Relax] Fix flaky test_conv2d gradient numeric test [tvm]
via GitHub
Re: [PR] [Relax] Fix flaky test_conv2d gradient numeric test [tvm]
via GitHub
[PR] [Relax][PyTorch] Fix PyTorch Dynamo frontend for Darwin compatibility [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix PyTorch Dynamo frontend for Darwin compatibility [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix PyTorch Dynamo frontend for Darwin compatibility [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix PyTorch Dynamo frontend for Darwin compatibility [tvm]
via GitHub
Re: [PR] [Relax][PyTorch] Fix PyTorch Dynamo frontend for Darwin compatibility [tvm]
via GitHub
[PR] [Relax] Add test case for op attributes in AST printer [tvm]
via GitHub
Re: [PR] [Relax] Add test case for op attributes in AST printer [tvm]
via GitHub
Re: [PR] [Relax] Add test case for op attributes in AST printer [tvm]
via GitHub