pengzhao-intel commented on issue #16830: CI error in unix gpu test_quantization_gpu.test_quantized_conv

URL: https://github.com/apache/incubator-mxnet/issues/16830#issuecomment-558862104

> Happening again: http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-cpu/detail/master/1325/pipeline
>
> ```
> ======================================================================
> FAIL: test_quantization_mkldnn.test_quantized_conv
> ----------------------------------------------------------------------
> Traceback (most recent call last):
>   File "/usr/local/lib/python3.5/dist-packages/nose/case.py", line 198, in runTest
>     self.test(*self.arg)
>   File "/usr/local/lib/python3.5/dist-packages/nose/util.py", line 620, in newfunc
>     return func(*arg, **kw)
>   File "/work/mxnet/tests/python/mkl/../unittest/common.py", line 177, in test_new
>     orig_test(*args, **kwargs)
>   File "/work/mxnet/tests/python/mkl/../quantization/test_quantization.py", line 277, in test_quantized_conv
>     check_quantized_conv((3, 4, 28, 28), (3, 3), 128, (1, 1), (1, 1), False, qdtype)
>   File "/work/mxnet/tests/python/mkl/../quantization/test_quantization.py", line 273, in check_quantized_conv
>     assert cond == 0
> AssertionError:
> -------------------- >> begin captured stdout << ---------------------
> skipped testing quantized_conv for mkldnn cpu int8 since it is not supported yet
> skipped testing quantized_conv for mkldnn cpu int8 since it is not supported yet
> ```
>
> @xinyu-intel @PatricZhao Could anyone take a look?

Yes, @xinyu-intel is looking into this, and the PR is under testing.
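For context on the failing `assert cond == 0` line: the test compares the quantized convolution output against an FP32 reference and asserts that the count of mismatching elements is zero. A minimal, hypothetical sketch of that style of check (the helper name `count_mismatches` and the tolerance value are assumptions, not the actual test code):

```python
import numpy as np

def count_mismatches(quantized_out, fp32_out, atol=1.0):
    """Count elements whose (dequantized) quantized output deviates from
    the FP32 reference by more than `atol`.
    NOTE: the tolerance of 1.0 is an illustrative assumption, not the
    value used in test_quantization.py."""
    diff = np.abs(quantized_out.astype(np.float64) - fp32_out.astype(np.float64))
    return int(np.sum(diff > atol))

# Synthetic example (not real conv outputs): small deviations pass,
# so the mismatch count is zero and the assertion holds.
fp32_ref = np.array([10.0, 20.0, 30.0])
dequantized = np.array([10.4, 19.8, 30.9])
cond = count_mismatches(dequantized, fp32_ref)
assert cond == 0  # mirrors the `assert cond == 0` in the traceback
```

When a flaky input or an unsupported kernel path produces even one out-of-tolerance element, `cond` becomes nonzero and the assertion fails exactly as shown in the captured traceback.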