apeforest commented on a change in pull request #11356: [MXNET-560][WIP] Add temperature parameter in Softmax and SoftmaxOutput operator URL: https://github.com/apache/incubator-mxnet/pull/11356#discussion_r197279433
########## File path: src/operator/nn/softmax-inl.h ##########

```diff
@@ -145,23 +155,36 @@ __global__ void softmax_compute_kernel(DType *in, DType *out, index_t M, int axi
   __syncthreads();
   red::sum::SetInitValue(smem[x]);
-  for (index_t i = x; i < M; i += x_size) {
-    red::sum::Reduce(smem[x], static_cast<DType>(expf(in[base + i*sa] - smax)));
+  if (temperature == 1.0) {
+    for (index_t i = x; i < M; i += x_size) {
+      red::sum::Reduce(smem[x], static_cast<DType>(expf(in[base + i*sa] - smax)));
+    }
+  } else {
+    for (index_t i = x; i < M; i += x_size) {
+      red::sum::Reduce(smem[x], static_cast<DType>(expf((in[base + i*sa] - smax)/temperature)));
+    }
   }
+
   __syncthreads();
   cuda::Reduce1D<red::sum, x_bits>(smem);
   __syncthreads();
   DType ssum = smem[0];
   __syncthreads();
-  for (index_t i = x; i < M; i += x_size) {
-    out[base + i*sa] = OP::Map(in[base + i*sa] - smax, ssum);
+  if (temperature == 1.0) {
```

Review comment: I added this `if` condition out of a performance concern. I'd assume (correct me if I'm wrong) that in roughly 90% of cases the temperature passed to softmax is 1. Adding a divide-by-1.0 operation inside `expf((in[base + i*sa] - smax)/temperature)` would slow down this computation, and I am not aware of any compiler that can optimize it away.