driazati opened a new issue, #11435:
URL: https://github.com/apache/tvm/issues/11435

   These tests were found to be flaky (intermittently failing on `main` or failing in a PR with unrelated changes). See [the docs](https://github.com/apache/tvm/blob/main/docs/contribute/ci.rst#handling-flaky-failures) for details.
   
   ### Test(s)
   
     - `tests/python/frontend/paddlepaddle/test_forward.py::test_forward_group_norm`
   
   ### Jenkins Links
   
     - https://ci.tlcpack.ai/job/tvm/job/PR-11420/1/display/redirect
   
   It looks like bumping the tolerances would fix it:
   
   ```
   AssertionError: 
   Not equal to tolerance rtol=1e-05, atol=1e-05
   Mismatched elements: 2 / 16 (12.5%)
   Max absolute difference: 3.6597252e-05
   Max relative difference: 2.2521937e-05
    x: array([[[[-1.327874]],
           [[-0.089664]],...
    y: array([[[[-1.327873]],
           [[-0.089664]],...
   Stacktrace
   @tvm.testing.uses_gpu
       def test_forward_group_norm():
           class GroupNorm(nn.Layer):
               def __init__(self, channels, groups):
                   super(GroupNorm, self).__init__()
                self.group_norm = paddle.nn.GroupNorm(num_channels=channels, num_groups=groups)
       
               def forward(self, inputs):
                   return self.group_norm(inputs)
       
           input_shapes = [[1, 4, 6, 6], [2, 2, 4, 7], [2, 8, 1, 1]]
           for input_shape in input_shapes:
               num_channels = input_shape[1]
               input_data = paddle.uniform(input_shape)
               verify_model(GroupNorm(num_channels, 1), input_data)
   >           verify_model(GroupNorm(num_channels, 2), input_data)
   tests/python/frontend/paddlepaddle/test_forward.py:725: 
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
   tests/python/frontend/paddlepaddle/test_forward.py:108: in verify_model
        tvm.testing.assert_allclose(baseline_output, compiled_output, rtol=rtol, atol=atol)
    _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
   actual = array([[[[-1.3278736 ]],
           [[-0.08966437]],
           [[ 1.4906697 ]],
           [[-0.07313112]],
           [[-0.3...        [[ 0.73773575]],
           [[ 0.8946406 ]],
           [[-1.6294222 ]],
           [[-0.00295429]]]], dtype=float32)
   desired = array([[[[-1.3278728 ]],
           [[-0.08966432]],
           [[ 1.4906689 ]],
           [[-0.07313108]],
           [[-0.3...        [[ 0.73773575]],
           [[ 0.8946406 ]],
           [[-1.6294222 ]],
           [[-0.00295429]]]], dtype=float32)
   rtol = 1e-05, atol = 1e-05
       def assert_allclose(actual, desired, rtol=1e-7, atol=1e-7):
            """Version of np.testing.assert_allclose with `atol` and `rtol` fields set
            in reasonable defaults.

            Arguments `actual` and `desired` are not interchangeable, since the function
            compares the `abs(actual-desired)` with `atol+rtol*abs(desired)`.  Since we
            often allow `desired` to be close to zero, we generally want non-zero `atol`.
            """
           actual = np.asanyarray(actual)
           desired = np.asanyarray(desired)
           np.testing.assert_allclose(actual.shape, desired.shape)
    >       np.testing.assert_allclose(actual, desired, rtol=rtol, atol=atol, verbose=True)
   E       AssertionError: 
   E       Not equal to tolerance rtol=1e-05, atol=1e-05
   E       
   E       Mismatched elements: 2 / 16 (12.5%)
   E       Max absolute difference: 3.6597252e-05
   E       Max relative difference: 2.2521937e-05
   E        x: array([[[[-1.327874]],
   E       
   E               [[-0.089664]],...
   E        y: array([[[[-1.327873]],
   E       
   E               [[-0.089664]],...
   ```
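   As a sanity check on the tolerance-bump suggestion, the reported max differences can be plugged into NumPy's `assert_allclose` criterion, `|actual - desired| <= atol + rtol * |desired|`. The `passes` helper below is purely illustrative (not part of the test suite); it only uses the two numbers from the failure report:

   ```python
   # Numbers taken from the failure report above.
   max_abs_diff = 3.6597252e-05
   max_rel_diff = 2.2521937e-05

   # Magnitude of the worst-offending element, recovered from
   # abs_diff = rel_diff * |desired|  =>  |desired| = abs_diff / rel_diff.
   desired_mag = max_abs_diff / max_rel_diff

   def passes(rtol, atol):
       """Check the worst element against numpy's allclose criterion."""
       return max_abs_diff <= atol + rtol * desired_mag

   print(passes(rtol=1e-5, atol=1e-5))  # current tolerances: False
   print(passes(rtol=1e-4, atol=1e-4))  # bumped tolerances: True
   ```

   So the failure margin is small: the worst element misses the current `1e-05` tolerances by roughly a factor of 1.4, and bumping `rtol`/`atol` to `1e-04` would comfortably absorb it.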


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
