chrishkchris commented on a change in pull request #468: Distributted module
URL: https://github.com/apache/incubator-singa/pull/468#discussion_r317889630
########## File path: python/singa/autograd.py ##########
@@ -1286,25 +1287,26 @@ def set_params(self, **parameters):

 class _BatchNorm2d(Operation):
-    def __init__(self, handle, name=None):
+    def __init__(self, handle, running_mean, running_var, name=None):
         super(_BatchNorm2d, self).__init__(name)
         self.handle = handle
+        self.running_mean = running_mean.data
+        self.running_var = running_var.data

-    def forward(self, x, scale, bias, running_mean, running_var):
-        self.running_mean = running_mean
-        self.running_var = running_var
+    def forward(self, x, scale, bias):
         if training:
             if isinstance(self.handle, singa.CudnnBatchNormHandle):
                 y, mean, var = singa.GpuBatchNormForwardTraining(
-                    self.handle, x, scale, bias, running_mean, running_var
+                    self.handle, x, scale, bias, self.running_mean, self.running_var

Review comment:
The following is the resnet18 training on CIFAR10 using the CPU for the first few epochs. The CPU is slow, so I trained only a few epochs:
```
ubuntu@ip-172-31-16-147:~/incubator-singa/examples/autograd$ python3 resnetcifarcpu.py
Loading data file cifar-10-batches-py/data_batch_1
Loading data file cifar-10-batches-py/data_batch_2
Loading data file cifar-10-batches-py/data_batch_3
Loading data file cifar-10-batches-py/data_batch_4
Loading data file cifar-10-batches-py/data_batch_5
Loading data file cifar-10-batches-py/test_batch
Start intialization............
Epoch=0: 100%|████████████████████████████████████████████████████████████████████████| 1562/1562 [2:09:57<00:00, 5.03s/it]
Training loss = 2233.394769, training accuracy = 0.490297
Test accuracy = 0.636218
Epoch=1: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 1562/1562 [2:10:00<00:00, 4.98s/it]
Training loss = 1474.432049, training accuracy = 0.666633
Test accuracy = 0.678986
Epoch=2: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 1562/1562 [2:10:11<00:00, 5.00s/it]
Training loss = 1163.035850, training accuracy = 0.741717
Test accuracy = 0.738181
Epoch=3: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 1562/1562 [2:10:31<00:00, 5.03s/it]
Training loss = 979.977119, training accuracy = 0.782570
Test accuracy = 0.800581
Epoch=4: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 1562/1562 [2:10:10<00:00, 4.98s/it]
Training loss = 872.811802, training accuracy = 0.806098
Test accuracy = 0.813902
Epoch=5: 100%|███████████████████████████████████████████████████████████████████████████████████████████| 1562/1562 [2:10:05<00:00, 4.99s/it]
Training loss = 782.525783, training accuracy = 0.826144
Test accuracy = 0.832232
```
The training loss decreases normally, so it seems the CPU batch norm is working.
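For context on what the CPU run is checking, here is a minimal NumPy sketch of a training-mode batch-norm forward that keeps the running statistics as state on the operation object, which is the behavior this change moves from `forward()` arguments into `__init__`. This is an illustration only, not SINGA's API; the class name `BatchNorm2dSketch`, the momentum convention, and the parameters are made up for the sketch.

```python
import numpy as np

class BatchNorm2dSketch:
    """Illustrative only -- not SINGA's _BatchNorm2d."""

    def __init__(self, num_channels, momentum=0.9, eps=1e-5):
        self.gamma = np.ones(num_channels, dtype=np.float32)   # scale (trainable)
        self.beta = np.zeros(num_channels, dtype=np.float32)   # bias (trainable)
        # Running statistics are held as state of the operation,
        # mirroring how this change binds running_mean/running_var in __init__.
        self.running_mean = np.zeros(num_channels, dtype=np.float32)
        self.running_var = np.ones(num_channels, dtype=np.float32)
        self.momentum = momentum
        self.eps = eps

    def forward(self, x, training=True):
        # x has shape (N, C, H, W); statistics are computed per channel.
        if training:
            mean = x.mean(axis=(0, 2, 3))
            var = x.var(axis=(0, 2, 3))
            self.running_mean = (self.momentum * self.running_mean
                                 + (1.0 - self.momentum) * mean)
            self.running_var = (self.momentum * self.running_var
                                + (1.0 - self.momentum) * var)
        else:
            mean, var = self.running_mean, self.running_var
        x_hat = (x - mean[None, :, None, None]) / np.sqrt(
            var[None, :, None, None] + self.eps)
        return self.gamma[None, :, None, None] * x_hat + self.beta[None, :, None, None]

if __name__ == "__main__":
    x = np.random.randn(8, 3, 16, 16).astype(np.float32)
    bn = BatchNorm2dSketch(num_channels=3)
    y = bn.forward(x, training=True)
    print(y.mean(), y.std())  # roughly 0 and 1 after normalization
```

Printing the mean and standard deviation of the normalized output (close to 0 and 1) is a quick sanity check, in the same spirit as confirming that the training loss above decreases on the CPU.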