rebel-jonghewk opened a new issue, #15785:
URL: https://github.com/apache/tvm/issues/15785

   
   ### Expected behavior
   
   The model should compile successfully.
   
   ### Actual behavior
   
   ```
   InternalError: Check failed: (pval != nullptr) is false: Cannot allocate memory symbolic tensor shape [T.Any()]
   ```
   
   ### Environment
   
   TVM - 0.14.dev0
   Linux & macOS
   
   ### Steps to reproduce
   
   ```python
   import torch
   import tvm
   from tvm import relay

   # Single int32 input tensor; bincount's output length depends on its values.
   arg1 = torch.randint(0, 8, (5,), dtype=torch.int32)
   arg_in = [arg1]
   model = lambda *args: torch.bincount(*args)
   shape_list = [("input{}".format(i), (arg.shape, "int")) for i, arg in enumerate(arg_in)]
   scripted_model = torch.jit.trace(model, arg_in)

   mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)

   target = tvm.target.Target("llvm", host="llvm")
   dev = tvm.cpu(0)
   with tvm.transform.PassContext(opt_level=3):
       # Fails here with the InternalError shown above.
       lib = relay.build(mod, target=target, params=params)
   ```
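   For context (an observation, not a claim about the fix): `torch.bincount` returns a tensor whose length is `max(input) + 1`, so the output shape depends on the input *values*, not just the input shape. Tracing therefore yields a graph with a data-dependent output shape, which Relay represents as `[T.Any()]` and which `relay.build`'s static graph executor cannot allocate up front. A minimal pure-Python sketch of the shape dependence (the helper `bincount_shape` is hypothetical, for illustration only):

   ```python
   # Sketch: why bincount's output shape is data-dependent.
   # The output length is max(input) + 1, so it cannot be derived from
   # the input's shape alone -- only from its values at run time.

   def bincount_shape(values):
       """Output length torch.bincount would produce (no minlength)."""
       return max(values) + 1 if values else 0

   # Two inputs with the SAME shape (5 elements each) ...
   a = [0, 1, 2, 3, 4]
   b = [0, 1, 2, 3, 100]

   # ... yield different output shapes:
   print(bincount_shape(a))  # 5
   print(bincount_shape(b))  # 101
   ```

   In general, dynamic-shape Relay programs are executed through the Relay VM (e.g. `relay.create_executor(kind="vm", ...)`) rather than the static graph executor, though whether that path handles this particular operator is not verified here.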
   
   

