ThomasDelteil commented on a change in pull request #9583: use nd for accuracy calculation
URL: https://github.com/apache/incubator-mxnet/pull/9583#discussion_r175508386
##########
File path: python/mxnet/metric.py
##########
@@ -380,23 +380,27 @@ def update(self, labels, preds):
         Parameters
         ----------
         labels : list of `NDArray`
-            The labels of the data.
+            The labels of the data with class indices as values, one per sample.
         preds : list of `NDArray`
-            Predicted values.
+            Prediction values for samples. Each prediction value can either be the class index,
+            or a vector of likelihoods for all classes.
         """
         check_label_shapes(labels, preds)

         for label, pred_label in zip(labels, preds):
             if pred_label.shape != label.shape:
                 pred_label = ndarray.argmax(pred_label, axis=self.axis)
-            pred_label = pred_label.asnumpy().astype('int32')
-            label = label.asnumpy().astype('int32')
+            pred_label = pred_label.astype('int32')
+            label = label.astype('int32')

             check_label_shapes(label, pred_label)

-            self.sum_metric += (pred_label.flat == label.flat).sum()
-            self.num_inst += len(pred_label.flat)
+            if pred_label.context != label.context:
+                pred_label = pred_label.as_in_context(label.context)
+
+            self.sum_metric += (pred_label.flatten() == label.flatten()).sum().asscalar()

Review comment:
   asscalar() is equivalent to asnumpy()[0]. This PR did not solve the problem of the metric being computed in the numpy world.
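To make the point concrete, here is a minimal sketch of what keeping the metric in the NDArray world could look like: the running correct-count is accumulated as an NDArray and asscalar() (the blocking device-to-host copy) is only called when the metric is read, not on every update(). The LazyAccuracy class name and the deferred-accumulation strategy are illustrative assumptions, not code from this PR.

```python
# Sketch only: accumulate the correct-count as an NDArray and defer the
# asscalar()/asnumpy() copy to get(), instead of syncing on every batch.
from mxnet import ndarray


class LazyAccuracy:
    """Hypothetical accuracy metric that stays in the NDArray world during update()."""

    def __init__(self, axis=1):
        self.axis = axis
        self.reset()

    def reset(self):
        self._sum = None   # running number of correct predictions, kept as an NDArray
        self._num = 0      # number of samples seen

    def update(self, labels, preds):
        for label, pred_label in zip(labels, preds):
            if pred_label.shape != label.shape:
                pred_label = ndarray.argmax(pred_label, axis=self.axis)
            pred_label = pred_label.astype('int32')
            label = label.astype('int32').as_in_context(pred_label.context)
            # elementwise comparison and sum are queued as NDArray ops; no copy here
            correct = (pred_label.reshape((-1,)) == label.reshape((-1,))).sum()
            self._sum = correct if self._sum is None else self._sum + correct
            self._num += pred_label.size

    def get(self):
        # the only blocking NDArray -> numpy conversion happens here, once per report
        if self._num == 0:
            return ('accuracy', float('nan'))
        return ('accuracy', self._sum.asscalar() / self._num)
```

With something like this, update() only enqueues asynchronous NDArray operations, so training does not stall on a synchronization for every batch.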