szha commented on a change in pull request #7772: Use memcpy instead of assigning each individual element
URL: https://github.com/apache/incubator-mxnet/pull/7772#discussion_r137418825
 
 

 ##########
 File path: src/operator/tensor/cast_storage-inl.h
 ##########
 @@ -120,9 +119,7 @@ struct CastStorageRspDnsKernel {
     IType rid = idx[i];
     dim_t dns_offset = rid * row_length;
     dim_t rsp_offset = i * row_length;
-    for (dim_t col = 0; col < row_length; col++) {
-      dns[dns_offset + col] = data[rsp_offset + col];
-    }
+    memcpy(dns + dns_offset, data + rsp_offset, sizeof(DType) * row_length);
 
 Review comment:
   Did a bit of searching on this, and it turns up the following:
   1. memcpy seems to work inside CUDA kernels per https://stackoverflow.com/questions/10456728/is-there-an-equivalent-to-memcpy-that-works-inside-a-cuda-kernel (might be outdated)
   2. cudaMemcpyAsync on device is available since CUDA 6, though there are [limitations](http://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#ixzz4rwhsHEfu):
   ```
   Notes about all memcpy/memset functions:
   Only async memcpy/set functions are supported
   Only device-to-device memcpy is permitted
   May not pass in local or shared memory pointers
   ```
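
   For reference, the replacement in the diff is behaviorally equivalent to the loop it removes; here is a minimal host-side C++ sketch of the same row-copy pattern (types and data are hypothetical stand-ins for the kernel's `DType`/`IType` and the rsp/dns buffers):

   ```cpp
   #include <cstdio>
   #include <cstring>
   #include <vector>

   int main() {
     using DType = float;
     const long row_length = 4;
     // Hypothetical row-sparse storage: 2 stored rows, destination has 3 rows.
     std::vector<DType> data = {1, 2, 3, 4, 5, 6, 7, 8};  // compacted row data
     std::vector<long> idx = {0, 2};                       // row ids of stored rows
     std::vector<DType> dns(3 * row_length, 0);            // dense output, zero-filled

     for (long i = 0; i < static_cast<long>(idx.size()); ++i) {
       long dns_offset = idx[i] * row_length;
       long rsp_offset = i * row_length;
       // One bulk copy replaces the per-element column loop from the diff.
       std::memcpy(dns.data() + dns_offset, data.data() + rsp_offset,
                   sizeof(DType) * row_length);
     }
     // First element of dense rows 0 and 2, which received the stored rows.
     std::printf("%g %g\n", dns[0], dns[2 * row_length]);
     return 0;
   }
   ```

   On device, whether the compiler lowers `memcpy` to an efficient copy (or whether `cudaMemcpyAsync` with its device-side restrictions is preferable) is the open question in this thread.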
 
----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org

