tinywisdom opened a new issue, #18337:
URL: https://github.com/apache/tvm/issues/18337
### Expected behavior
When converting a `torch.export`-exported program to TVM Relax via
`from_exported_program`, a model that returns a tuple containing both a
`Tensor` and a `None` (non-tensor) element should convert and build without
crashing. Eager execution and `torch.export` itself work fine; the crash
only happens inside the TVM frontend / build pipeline.
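For reference, the PyTorch side handles this output shape cleanly; a minimal
sketch (the module and shapes mirror the repro script below):
```python
# Sanity check on the PyTorch side only: a (Tensor, None) output survives
# torch.export and re-execution of the exported module without any error.
import torch
import torch.nn as nn


class Tiny(nn.Module):
    def forward(self, x):
        return x + 1, None


ep = torch.export.export(Tiny().eval(), (torch.randn(2, 3),))
print(ep.module()(torch.randn(2, 3)))  # -> (tensor(...), None)
```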
### Actual behavior
TVM hits an FFI segfault during conversion/build:
```
torch==2.7.1a0+gite2d141d
tvm==0.21.0
tvm.relax.frontend.torch available: False
[step] from_exported_program ...
!!!!!!! TVM FFI encountered a Segfault !!!!!!!
File "<unknown>", in
tvm::relax::Tuple::Tuple(tvm::ffi::Array<tvm::RelaxExpr, void>, tvm::Span)
Segmentation fault (core dumped)
```
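The trace points at `tvm::relax::Tuple::Tuple`. A minimal probe one could run
to check whether the tuple constructor itself is the crash site (my assumption,
not part of the original trace, is that the frontend ends up passing a Python
`None` into the Relax tuple fields):
```python
# Hypothetical probe, not taken from the report: does relax.Tuple tolerate a
# None field? If the segfault really is in Tuple::Tuple, this would be expected
# to hit the same FFI failure; if it raises a clean TypeError instead, the
# problem is further up in the frontend's output handling.
from tvm import relax

x = relax.Var("x", relax.TensorStructInfo((2, 3), "float32"))
t = relax.Tuple([x, None])  # None is not a RelaxExpr
print(t)
```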
### Environment
+ OS: Ubuntu 22.04.4 LTS (x86_64)
+ TVM version: release v0.21.0
+ Python: 3.10.16
+ LLVM: 17.0.6
### Steps to reproduce
```python
# tvm_relax_export_none_repro.py
import torch
import torch.nn as nn


def versions():
    import tvm
    from tvm import relax

    print(f"torch=={torch.__version__}")
    print(f"tvm=={tvm.__version__}")
    print("tvm.relax.frontend.torch available:", hasattr(relax.frontend, "torch"))


class Tiny(nn.Module):
    def forward(self, x):
        # Key: returns (Tensor, None) — tuple with non-tensor element
        return x + 1, None


def repro():
    import tvm
    from tvm import relax
    from tvm.relax.frontend.torch import from_exported_program

    torch.manual_seed(0)
    m = Tiny().eval()
    x = torch.randn(2, 3)

    # 1) torch.export
    ep = torch.export.export(m, (x,))

    # 2) Relax frontend
    print("[step] from_exported_program ...")
    mod = from_exported_program(ep)

    # 3) build (usually crashes here)
    print("[step] relax.build ...")
    target = tvm.target.Target("llvm")
    ex = relax.build(mod, target=target)

    # 4) run (rarely reached)
    print("[step] vm run ...")
    vm = relax.VirtualMachine(ex, tvm.cpu(0))
    y = vm["main"](tvm.nd.from_dlpack(torch.utils.dlpack.to_dlpack(x)))
    print("OK, got outputs:", y)


if __name__ == "__main__":
    versions()
    repro()
```
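A possible workaround sketch on my side (not verified against this TVM build,
and the wrapper class is hypothetical): export a wrapper that drops the `None`
output so `from_exported_program` only ever sees `Tensor` outputs, then
re-attach the `None` in Python after the VM call.
```python
# Hypothetical workaround sketch: keep the non-tensor element out of the
# exported program entirely.
import torch
import torch.nn as nn


class TinyTensorOnly(nn.Module):
    def __init__(self, inner: nn.Module):
        super().__init__()
        self.inner = inner

    def forward(self, x):
        y, _none = self.inner(x)  # drop the None element before export
        return y


# ep = torch.export.export(TinyTensorOnly(Tiny().eval()), (torch.randn(2, 3),))
# ... feed `ep` to from_exported_program as in the repro above, then rebuild
# the (tensor, None) tuple in Python after the VM call.
```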
### Triage
* needs-triage
* bug