pig-pig-yang commented on issue #12469:
URL: https://github.com/apache/tvm/issues/12469#issuecomment-1219343173

   > In the Relay text dump, I'm seeing this
   > 
   > ```
   >   %57 = clip(%56, a_min=0f, a_max=6f) /* ty=Tensor[(1, 72, 1, 1), float32] */;
   >   %58 = divide(%57, 6f /* ty=float32 */) /* ty=Tensor[(1, 72, 1, 1), float32] */;
   >   %59 = multiply(%58, %49) /* ty=Tensor[(1, 72, 28, 28), float32] */;
   >   %60 = nn.conv2d(%59, %features.4.block.3.0.weight, padding=[0, 0, 0, 0], channels=40, kernel_size=[1, 1]) /* ty=Tensor[(1, 40, 28, 28), float32] */;
   > ```
   > 
   > So it looks all right. @pig-pig-yang Please check your visualization. There is no conversion error.
   
   Thanks for the reply. Sorry, I pasted the wrong code earlier. I rechecked it: the error does not occur during the front-end conversion; it occurs after `tvm.relay.quantize.prerequisite_optimize`. The code is:
   ```
    import tvm
    import tvm.relay as relay

    import torch
    import torchvision

    from tvm.relay.quantize import prerequisite_optimize
   
   ##################################################
   # Preparing relay networks.
   # ------------------------------------------------
   
   model_name = "mobilenet_v3"
   model = torchvision.models.mobilenet_v3_large(pretrained=True)
   model = model.eval()
   
   input_shape = [1, 3, 224, 224]
   input_data = torch.randn(input_shape)
   scripted_model = torch.jit.trace(model, input_data).eval()
   
   input_name = "input0"
   shape_list = [(input_name, (1, 3, 224, 224))]
   
   mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
   
    with tvm.transform.PassContext(opt_level=3):
        mod = prerequisite_optimize(mod, params)
   
   print(mod)
   ```
   
    `prerequisite_optimize` consists of several commonly used passes:
   ```
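    # As found in TVM's python/tvm/relay/quantize/quantize.py; here _transform
    # is tvm.relay.transform and _bind_params is a module-local helper.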
   def prerequisite_optimize(mod, params=None):
       """Prerequisite optimization passes for quantization. Perform
       "SimplifyInference", "FoldScaleAxis", "FoldConstant", and
       "CanonicalizeOps" optimization before quantization."""
       optimize = tvm.transform.Sequential(
           [
               _transform.SimplifyInference(),
               _transform.FoldConstant(),
               _transform.FoldScaleAxis(),
               _transform.CanonicalizeOps(),
               _transform.FoldConstant(),
           ]
       )
   
       if params:
           mod["main"] = _bind_params(mod["main"], params)
   
       mod = optimize(mod)
       return mod
   ```
    Then I get the wrong Relay structure, like this:
    ```
      %66 = multiply(meta[relay.Constant][32] /* ty=Tensor[(40, 120, 1, 1), float32] */, %65) /* ty=Tensor[(40, 120, 1, 1), float32] */;
      %67 = nn.conv2d(%54, %66, padding=[0, 0, 0, 0], channels=40, kernel_size=[1, 1]) /* ty=Tensor[(1, 40, 28, 28), float32] */;
      %68 = add(%67, meta[relay.Constant][37] /* ty=Tensor[(40, 1, 1), float32] */) /* ty=Tensor[(1, 40, 28, 28), float32] */;
    ```
   
    I haven't confirmed which pass causes this yet; I will narrow it down by applying the passes one at a time (see the sketch below) and follow up. Thanks!
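
    Something like the following is what I plan to run, continuing from the `mod` and `params` in the script above (using the public `bind_params_by_name` in place of the private `_bind_params` helper):
    ```
    from tvm.relay import transform as _transform
    from tvm.relay.build_module import bind_params_by_name

    # Same pass list as prerequisite_optimize, applied one at a time so the
    # module can be dumped after each step and the offending pass identified.
    passes = [
        _transform.SimplifyInference(),
        _transform.FoldConstant(),
        _transform.FoldScaleAxis(),
        _transform.CanonicalizeOps(),
        _transform.FoldConstant(),
    ]

    mod["main"] = bind_params_by_name(mod["main"], params)
    with tvm.transform.PassContext(opt_level=3):
        for opt in passes:
            mod = _transform.InferType()(mod)  # keep the module typed between passes
            mod = opt(mod)
            print("=== after", opt.info.name, "===")
            print(mod)
    ```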

