crawlingcub opened a new issue, #12625:
URL: https://github.com/apache/tvm/issues/12625

   I modified the `test_forward_conv` test in `tests/python/frontend/pytorch/test_forward.py` to test a specific configuration for `Conv2d`. The test assertion fails, showing large differences. Is this expected?
   
   I added the following test case:
   
   ```python
   @tvm.testing.uses_gpu
   def test_forward_conv():
       ...
       verify_model(
           torch.nn.Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1)).eval(),
           input_data=torch.randn((1, 6, 14, 14)),
       )
   ```
   Interestingly, the test passes for `Python 3.7 + PyTorch 1.8.0` but fails for `Python 3.8 + PyTorch 1.12.1`!
   
   Error:
   ```python
   >       verify_model(
               torch.nn.Conv2d(6, 16, kernel_size=(5, 5), stride=(1, 1)).eval(),
               input_data=torch.randn((1, 6, 14, 14)),
           )
   
   tests/python/frontend/pytorch/test_forward.py:1092:
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
   tests/python/frontend/pytorch/test_forward.py:204: in verify_model
        tvm.testing.assert_allclose(baseline_output, output, rtol=rtol, atol=atol)
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
   
    actual = array([[[[ 0.6142985 ,  1.0181227 ,  0.12569234, ..., -0.4939782 ,
               0.06732336, -0.47949016],
             [-0.9...       [-1.0163634 , -0.36315745, -0.4642239 , ...,  0.3805333 ,
              -0.34176496,  0.39958018]]]], dtype=float32)
    desired = array([[[[ 0.6141924 ,  1.0183791 ,  0.12561005, ..., -0.49378175,
               0.06719445, -0.4797126 ],
             [-0.9...       [-1.0164539 , -0.36309463, -0.46419635, ...,  0.3804193 ,
              -0.341265  ,  0.39938715]]]], dtype=float32)
   rtol = 1e-05, atol = 1e-05
   
        def assert_allclose(actual, desired, rtol=1e-7, atol=1e-7):
            """Version of np.testing.assert_allclose with `atol` and `rtol` fields set
            in reasonable defaults.

            Arguments `actual` and `desired` are not interchangeable, since the function
            compares the `abs(actual-desired)` with `atol+rtol*abs(desired)`.  Since we
            often allow `desired` to be close to zero, we generally want non-zero `atol`.
            """
           actual = np.asanyarray(actual)
           desired = np.asanyarray(desired)
           np.testing.assert_allclose(actual.shape, desired.shape)
    >       np.testing.assert_allclose(actual, desired, rtol=rtol, atol=atol, verbose=True)
   E       AssertionError:
   E       Not equal to tolerance rtol=1e-05, atol=1e-05
   E
   E       Mismatched elements: 1169 / 1600 (73.1%)
   E       Max absolute difference: 0.00506739
   E       Max relative difference: 3.4411154
    E        x: array([[[[ 0.614299,  1.018123,  0.125692, ..., -0.493978,  0.067323,
    E                 -0.47949 ],
    E                [-0.96468 , -0.363319,  0.468453, ...,  0.423603,  0.420821,...
    E        y: array([[[[ 0.614192,  1.018379,  0.12561 , ..., -0.493782,  0.067194,
    E                 -0.479713],
    E                [-0.964656, -0.363339,  0.468504, ...,  0.423627,  0.420996,...
   python/tvm/testing/utils.py:119: AssertionError
   ```
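   For reference, the tolerance rule quoted in the `assert_allclose` docstring above can be checked element-wise with plain NumPy. The sketch below uses the first element pair copied from the mismatch output (`0.6142985` vs `0.6141924`) and shows that its difference alone already exceeds the `atol + rtol * abs(desired)` bound at `rtol=atol=1e-05`:

   ```python
   import numpy as np

   # assert_allclose flags an element when
   #   abs(actual - desired) > atol + rtol * abs(desired)
   rtol, atol = 1e-5, 1e-5

   # First element pair from the mismatch printed above.
   actual = np.float32(0.6142985)
   desired = np.float32(0.6141924)

   diff = abs(actual - desired)          # ~1e-4
   bound = atol + rtol * abs(desired)    # ~1.6e-5
   print(diff, bound, diff > bound)      # the difference exceeds the bound
   ```

   A per-element difference of about `1e-4` is roughly what one would expect from a different accumulation order in the conv kernel (e.g. a different cuDNN/oneDNN algorithm between PyTorch versions), rather than an outright wrong result.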
   ### Expected behavior
   
   The test should pass with this configuration.
   
   ### Environment
   
   ```
   torch==1.12.1
   torchvision==0.13.1
   python 3.8.13
   Ubuntu 18.04
   ```
   
   

