[GitHub] apeforest commented on a change in pull request #11356: [MXNET-560][WIP] Add temperature parameter in Softmax and SoftmaxOutput operator

2018-06-27 Thread GitBox
apeforest commented on a change in pull request #11356: [MXNET-560][WIP] Add 
temperature parameter in Softmax and SoftmaxOutput operator
URL: https://github.com/apache/incubator-mxnet/pull/11356#discussion_r198675428
 
 

 ##
 File path: src/operator/nn/softmax-inl.h
 ##
 @@ -127,7 +137,7 @@ inline void SoftmaxGrad(Stream<cpu> *s, DType *out, DType *ograd,
 #ifdef __CUDACC__
 template<int x_bits, typename OP, typename DType, int ndim>
 __global__ void softmax_compute_kernel(DType *in, DType *out, index_t M, int axis,
-                                       Shape<ndim> sshape, Shape<ndim> stride) {
+                                       Shape<ndim> sshape, Shape<ndim> stride, float temperature) {
 
 Review comment:
   I have added the const qualifier following your suggestion.




[GitHub] apeforest commented on a change in pull request #11356: [MXNET-560][WIP] Add temperature parameter in Softmax and SoftmaxOutput operator

2018-06-21 Thread GitBox
apeforest commented on a change in pull request #11356: [MXNET-560][WIP] Add 
temperature parameter in Softmax and SoftmaxOutput operator
URL: https://github.com/apache/incubator-mxnet/pull/11356#discussion_r197304019
 
 

 ##
 File path: src/operator/nn/softmax-inl.h
 ##
 @@ -145,23 +155,36 @@ __global__ void softmax_compute_kernel(DType *in, DType *out, index_t M, int axis,
   __syncthreads();
 
   red::sum::SetInitValue(smem[x]);
-  for (index_t i = x; i < M; i += x_size) {
-    red::sum::Reduce(smem[x], static_cast<DType>(expf(in[base + i*sa] - smax)));
+  if (temperature == 1.0) {
+    for (index_t i = x; i < M; i += x_size) {
+      red::sum::Reduce(smem[x], static_cast<DType>(expf(in[base + i*sa] - smax)));
+    }
+  } else {
+    for (index_t i = x; i < M; i += x_size) {
+      red::sum::Reduce(smem[x], static_cast<DType>(expf((in[base + i*sa] - smax)/temperature)));
+    }
   }
+
   __syncthreads();
   cuda::Reduce1D<red::sum, x_bits>(smem);
   __syncthreads();
   DType ssum = smem[0];
   __syncthreads();
 
-  for (index_t i = x; i < M; i += x_size) {
-    out[base + i*sa] = OP::Map(in[base + i*sa] - smax, ssum);
+  if (temperature == 1.0) {
 
 Review comment:
   I agree with you about the overhead of the branching. However, there is a trade-off between performance and complexity here, and it really depends on how critical this piece of computation is to the overall performance of the network. In this case, I would assume softmax is called very often, since it typically sits in the last layer of a neural network. But I am not enough of an expert to assess the performance impact.




[GitHub] apeforest commented on a change in pull request #11356: [MXNET-560][WIP] Add temperature parameter in Softmax and SoftmaxOutput operator

2018-06-21 Thread GitBox
apeforest commented on a change in pull request #11356: [MXNET-560][WIP] Add 
temperature parameter in Softmax and SoftmaxOutput operator
URL: https://github.com/apache/incubator-mxnet/pull/11356#discussion_r197303252
 
 

 ##
 File path: src/operator/nn/softmax-inl.h
 ##
 @@ -127,7 +137,7 @@ inline void SoftmaxGrad(Stream<cpu> *s, DType *out, DType *ograd,
 #ifdef __CUDACC__
 template<int x_bits, typename OP, typename DType, int ndim>
 __global__ void softmax_compute_kernel(DType *in, DType *out, index_t M, int axis,
-                                       Shape<ndim> sshape, Shape<ndim> stride) {
+                                       Shape<ndim> sshape, Shape<ndim> stride, float temperature) {
 
 Review comment:
   Thanks for your suggestion, but I do not seem to find a convention of using const for pass-by-value parameters in the Google style guide:
   ```
   If a function guarantees that it will not modify an argument passed by 
reference or by pointer, the corresponding function parameter should be a 
reference-to-const (const T&) or pointer-to-const (const T*), respectively.
   ```
   In fact, adding an unnecessary const declaration places a restriction on the caller of the library function.
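   For reference, a minimal sketch (not from the PR; the function names are made up for illustration) of the two cases the quoted guideline actually covers, i.e. const on parameters passed by reference or by pointer:
   ```
   #include <vector>

   // Reference-to-const (const T&): the function promises the caller it will not
   // modify the argument, and that promise is visible in the interface.
   float sum(const std::vector<float>& values) {
     float s = 0.0f;
     for (float v : values) s += v;
     return s;
   }

   // Non-const pointer: the argument is meant to be modified, so no const.
   void scale_inplace(std::vector<float>* values, float factor) {
     for (float& v : *values) v *= factor;
   }

   int main() {
     std::vector<float> v{1.0f, 2.0f, 3.0f};
     scale_inplace(&v, 0.5f);
     return sum(v) > 0.0f ? 0 : 1;
   }
   ```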




[GitHub] apeforest commented on a change in pull request #11356: [MXNET-560][WIP] Add temperature parameter in Softmax and SoftmaxOutput operator

2018-06-21 Thread GitBox
apeforest commented on a change in pull request #11356: [MXNET-560][WIP] Add 
temperature parameter in Softmax and SoftmaxOutput operator
URL: https://github.com/apache/incubator-mxnet/pull/11356#discussion_r197280278
 
 

 ##
 File path: src/operator/nn/softmax-inl.h
 ##
 @@ -127,7 +137,7 @@ inline void SoftmaxGrad(Stream<cpu> *s, DType *out, DType *ograd,
 #ifdef __CUDACC__
 template<int x_bits, typename OP, typename DType, int ndim>
 __global__ void softmax_compute_kernel(DType *in, DType *out, index_t M, int axis,
-                                       Shape<ndim> sshape, Shape<ndim> stride) {
+                                       Shape<ndim> sshape, Shape<ndim> stride, float temperature) {
 
 Review comment:
   This is pass-by-value, so it does not make any difference.
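   For what it's worth, a minimal standalone sketch (not from the PR) of why top-level const on a by-value parameter is invisible to the caller: the two spellings declare the same function type.
   ```
   #include <type_traits>

   // Declaration as the caller sees it: no const on the by-value parameter.
   void scale(float temperature);

   // The definition may add top-level const; it only constrains the function body.
   void scale(const float temperature) {
     // temperature = 2.0f;  // would not compile: the local copy is const
   }

   // Top-level const is not part of the function type, so both spellings are identical.
   static_assert(std::is_same<void(float), void(const float)>::value,
                 "const on a by-value parameter does not change the signature");

   int main() {
     scale(1.0f);
     return 0;
   }
   ```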




[GitHub] apeforest commented on a change in pull request #11356: [MXNET-560][WIP] Add temperature parameter in Softmax and SoftmaxOutput operator

2018-06-21 Thread GitBox
apeforest commented on a change in pull request #11356: [MXNET-560][WIP] Add 
temperature parameter in Softmax and SoftmaxOutput operator
URL: https://github.com/apache/incubator-mxnet/pull/11356#discussion_r197279433
 
 

 ##
 File path: src/operator/nn/softmax-inl.h
 ##
 @@ -145,23 +155,36 @@ __global__ void softmax_compute_kernel(DType *in, DType *out, index_t M, int axis,
   __syncthreads();
 
   red::sum::SetInitValue(smem[x]);
-  for (index_t i = x; i < M; i += x_size) {
-    red::sum::Reduce(smem[x], static_cast<DType>(expf(in[base + i*sa] - smax)));
+  if (temperature == 1.0) {
+    for (index_t i = x; i < M; i += x_size) {
+      red::sum::Reduce(smem[x], static_cast<DType>(expf(in[base + i*sa] - smax)));
+    }
+  } else {
+    for (index_t i = x; i < M; i += x_size) {
+      red::sum::Reduce(smem[x], static_cast<DType>(expf((in[base + i*sa] - smax)/temperature)));
+    }
   }
+
   __syncthreads();
   cuda::Reduce1D<red::sum, x_bits>(smem);
   __syncthreads();
   DType ssum = smem[0];
   __syncthreads();
 
-  for (index_t i = x; i < M; i += x_size) {
-    out[base + i*sa] = OP::Map(in[base + i*sa] - smax, ssum);
+  if (temperature == 1.0) {
 
 Review comment:
   I added this if condition out of performance concern. I'd assume (correct me if I'm wrong) that in 90% of cases the temperature passed to softmax is 1. Adding a divide-by-1.0 operation in expf((in[base + i*sa] - smax)/t) will slow down this computation, and I am not aware of any compiler that can optimize it away.
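   To illustrate the intent, here is a minimal CPU-side sketch (not the kernel in this PR) of the same special-casing: the division by temperature only happens on the uncommon path, so the temperature == 1 case pays nothing beyond a single branch per call.
   ```
   #include <algorithm>
   #include <cmath>
   #include <cstddef>
   #include <vector>

   // Scalar softmax with the same special-casing idea as in the kernel above:
   // skip the division entirely when temperature == 1.0.
   std::vector<float> softmax(const std::vector<float>& in, float temperature = 1.0f) {
     const float smax = *std::max_element(in.begin(), in.end());
     std::vector<float> out(in.size());
     float ssum = 0.0f;
     if (temperature == 1.0f) {
       for (std::size_t i = 0; i < in.size(); ++i) {
         out[i] = std::exp(in[i] - smax);                  // common case: no divide
         ssum += out[i];
       }
     } else {
       for (std::size_t i = 0; i < in.size(); ++i) {
         out[i] = std::exp((in[i] - smax) / temperature);  // uncommon case pays the divide
         ssum += out[i];
       }
     }
     for (float& v : out) v /= ssum;
     return out;
   }

   int main() {
     std::vector<float> logits{1.0f, 2.0f, 3.0f};
     std::vector<float> p = softmax(logits);        // default temperature of 1
     std::vector<float> q = softmax(logits, 0.5f);  // sharper distribution
     return (p.size() == q.size()) ? 0 : 1;
   }
   ```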

