This is an automated email from the ASF dual-hosted git repository.

junrushao pushed a commit to branch main
in repository https://gitbox.apache.org/repos/asf/tvm-ffi.git


The following commit(s) were added to refs/heads/main by this push:
     new 6d8b134  Allow handling of load-bearing compiler flags for dlpack (#231)
6d8b134 is described below

commit 6d8b134f5667c00f7d73fd648a2dd95bc63c8c75
Author: Jo Shields <[email protected]>
AuthorDate: Fri Nov 7 14:07:20 2025 -0500

    Allow handling of load-bearing compiler flags for dlpack (#231)
    
    This is largely a follow-up to
    https://github.com/flashinfer-ai/flashinfer/pull/1801, fixing a
    regression introduced by the new dependency.
    
    It is fairly common in niche scenarios - e.g. anywhere involving
    embedded development or cross-compilation - for compiler options to be
    more than decorative. For example, I sped up our (QEMU-based) CI by 80%
    by using an x64 build of clang, inside a riscv64 build root, so Python
    would think "I am doing riscv64 native builds" but CXX would point to
    the x64 clang++ binary. Works great, but has a hard dependency on
    passing values for `--sysroot` and `--target`.
    
    Since adding the tvm-ffi dependency, this has been broken in flashinfer,
    as `import tvm_ffi` shells out to the dlpack build script without
    passing any load-bearing compiler flags.
    
    So: add support for passing those flags via environment variables.
    
    Before:
    
    ```
    >>> import tvm_ffi
    Traceback (most recent call last):
      File "/tmp/tvm-ffi/python/tvm_ffi/utils/_build_optional_torch_c_dlpack.py", line 835, in <module>
        main()
      File "/tmp/tvm-ffi/python/tvm_ffi/utils/_build_optional_torch_c_dlpack.py", line 828, in main
        build_ninja(build_dir=str(build_dir))
      File "/tmp/tvm-ffi/python/tvm_ffi/cpp/extension.py", line 353, in 
build_ninja
        raise RuntimeError("\n".join(msg))
    RuntimeError: ninja exited with status 1
    stdout:
    [1/2] /x64-prefix/bin/clang++ -MMD -MF main.o.d -std=c++17 -fPIC -O2 -DBUILD_WITH_CUDA -D_GLIBCXX_USE_CXX11_ABI=1 -I/tmp/tvm-ffi/python/tvm_ffi/../../3rdparty/dlpack/include -I/usr/include/python3.12 -I/tmp/flashinfer/.venv/lib/python3.12/site-packages/torch/include -I/tmp/flashinfer/.venv/lib/python3.12/site-packages/torch/include/torch/csrc/api/include -I/prefix/include -c /tmp/tvm-ffi-torch-c-dlpack-ei5_rkaa/addon.cc -o main.o
    FAILED: [code=1] main.o
    /x64-prefix/bin/clang++ -MMD -MF main.o.d -std=c++17 -fPIC -O2 -DBUILD_WITH_CUDA -D_GLIBCXX_USE_CXX11_ABI=1 -I/tmp/tvm-ffi/python/tvm_ffi/../../3rdparty/dlpack/include -I/usr/include/python3.12 -I/tmp/flashinfer/.venv/lib/python3.12/site-packages/torch/include -I/tmp/flashinfer/.venv/lib/python3.12/site-packages/torch/include/torch/csrc/api/include -I/prefix/include -c /tmp/tvm-ffi-torch-c-dlpack-ei5_rkaa/addon.cc -o main.o
    In file included from /tmp/tvm-ffi-torch-c-dlpack-ei5_rkaa/addon.cc:2:
    In file included from /tmp/tvm-ffi/python/tvm_ffi/../../3rdparty/dlpack/include/dlpack/dlpack.h:35:
    In file included from /x64-prefix/lib/clang/21/include/stdint.h:56:
    /usr/include/stdint.h:26:10: fatal error: 'bits/libc-header-start.h' file not found
       26 | #include <bits/libc-header-start.h>
          |          ^~~~~~~~~~~~~~~~~~~~~~~~~~
    1 error generated.
    ninja: build stopped: subcommand failed.
    ```
    
    After:
    
    ```
    # export TVM_FFI_JIT_EXTRA_CFLAGS="--target=riscv64-unknown-linux-gnu --sysroot=/"
    # export TVM_FFI_JIT_EXTRA_LDFLAGS="--target=riscv64-linux-gnu --sysroot=/ -L/prefix/lib"
    >>> import tvm_ffi
    >>>
    ```
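    
    The values are split with shlex so quoted flags containing spaces survive
    intact; plain str.split() is only the fallback when shlex cannot parse the
    string. A minimal sketch of the difference (the sysroot path here is made
    up for illustration):
    
    ```
    import shlex
    
    value = '--sysroot="/opt/riscv sysroot" --target=riscv64-unknown-linux-gnu'
    
    # shlex keeps the quoted path together as a single argument
    print(shlex.split(value))
    # ['--sysroot=/opt/riscv sysroot', '--target=riscv64-unknown-linux-gnu']
    
    # naive whitespace splitting would tear the quoted path apart
    print(value.split())
    # ['--sysroot="/opt/riscv', 'sysroot"', '--target=riscv64-unknown-linux-gnu']
    ```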
---
 .../utils/_build_optional_torch_c_dlpack.py        | 24 ++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/python/tvm_ffi/utils/_build_optional_torch_c_dlpack.py b/python/tvm_ffi/utils/_build_optional_torch_c_dlpack.py
index 4be433f..eb7f897 100644
--- a/python/tvm_ffi/utils/_build_optional_torch_c_dlpack.py
+++ b/python/tvm_ffi/utils/_build_optional_torch_c_dlpack.py
@@ -568,6 +568,22 @@ extern "C" DLL_EXPORT int64_t TorchDLPackExchangeAPIPtr() {
 """
 
 
+def parse_env_flags(env_var_name: str) -> list[str]:
+    env_flags = os.environ.get(env_var_name)
+    if env_flags:
+        try:
+            import shlex  # noqa: PLC0415
+
+            return shlex.split(env_flags)
+        except ValueError as e:
+            print(
+                f"Warning: Could not parse {env_var_name} with shlex: {e}. 
Falling back to simple split.",
+                file=sys.stderr,
+            )
+            return env_flags.split()
+    return []
+
+
 def _generate_ninja_build(
     build_dir: Path,
     libname: str,
@@ -791,6 +807,14 @@ def main() -> None:  # noqa: PLR0912, PLR0915
                 ldflags.append(f"-L{python_libdir}")
                 ldflags.append(f"-l{py_version}")
 
+        env_ldflags = parse_env_flags("TVM_FFI_JIT_EXTRA_LDFLAGS")
+        if env_ldflags:
+            ldflags.extend(env_ldflags)
+
+        env_cflags = parse_env_flags("TVM_FFI_JIT_EXTRA_CFLAGS")
+        if env_cflags:
+            cflags.extend(env_cflags)
+
         # generate ninja build file
         _generate_ninja_build(
             build_dir=build_dir,

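If exporting the variables from a shell is awkward (e.g. in a Python-driven CI
harness), setting them via os.environ before the import should have the same
effect, since the shelled-out build script inherits the parent environment.
A sketch with hypothetical sysroot/prefix paths:

```
import os

# Hypothetical cross-compilation paths; substitute your own toolchain layout.
os.environ["TVM_FFI_JIT_EXTRA_CFLAGS"] = (
    "--target=riscv64-unknown-linux-gnu --sysroot=/opt/riscv/sysroot"
)
os.environ["TVM_FFI_JIT_EXTRA_LDFLAGS"] = (
    "--target=riscv64-unknown-linux-gnu --sysroot=/opt/riscv/sysroot -L/opt/riscv/prefix/lib"
)

import tvm_ffi  # the optional torch C DLPack addon build now receives the extra flags
```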