joddiy commented on pull request #794:
URL: https://github.com/apache/singa/pull/794#issuecomment-690602081


   Hi shicong, thanks for your code; it works fine for `MatMul`. However, `sub` seems to have a problem now. Please use the following test case to check:
   
   
   ```    
       def high_dim_helper(self, dev):
           configs = [
               # [(1, 12, 7, 64), (1, 12, 64, 7)],
               # [(1, 7, 768), (768, 768)],
                # broadcasting test: (1,) against (1, 1, 1, 7) currently fails for sub
                [(1,), (1, 1, 1, 7)],
           ]
           ops = [
               # [np.add, autograd.add],
               [np.subtract, autograd.sub],
               # [np.matmul, autograd.matmul],
               # [np.divide, autograd.div],
           ]
           for config in configs:
               for op in ops:
                   X = np.random.random(config[0]).astype(np.float32)
                   x = tensor.from_numpy(X)
                   x.to_device(dev)
   
                   W = np.random.random(config[1]).astype(np.float32)
                   w = tensor.from_numpy(W)
                   w.to_device(dev)
   
                   y_t = op[0](X, W)
                   y = op[1](x, w)
                    np.testing.assert_array_almost_equal(
                        tensor.to_numpy(y), y_t, decimal=3)
   
       def test_high_dim_cpu(self):
           self.high_dim_helper(cpu_dev)
   
       @unittest.skipIf(not singa_wrap.USE_CUDA, 'CUDA is not enabled')
       def test_high_dim_gpu(self):
           self.high_dim_helper(gpu_dev)
   ```
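
   For completeness, these methods assume the usual harness of the SINGA Python operation tests; a minimal setup sketch, where the exact import list and device helpers are my assumption rather than part of the patch:

    ```
    import unittest
    import numpy as np

    from singa import autograd, device, singa_wrap, tensor

    # Devices used by the helpers above; the CUDA device only exists on GPU builds.
    cpu_dev = device.get_default_device()
    if singa_wrap.USE_CUDA:
        gpu_dev = device.create_cuda_gpu()
    ```

   The expected behaviour is plain NumPy broadcasting: a `(1,)` operand broadcast against `(1, 1, 1, 7)` gives shape `(1, 1, 1, 7)`, so `autograd.sub` should match `np.subtract` here.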

