apeforest opened a new pull request #14570: [WIP] use a compile flag to use 
int64 tensor size
URL: https://github.com/apache/incubator-mxnet/pull/14570
 
 
   ## Description ##
   This PR is to fix the performance degradation reported in 
https://github.com/apache/incubator-mxnet/issues/14496.
   
   The performance degradation was introduced by PR https://github.com/apache/incubator-mxnet/pull/11742. I verified it on the `transpose` operator using the script below:
   
   ```python
   import mxnet as mx
   import time
   import numpy as np

   sizes = [10, 50, 100, 200, 500]
   iters = [10000, 1000, 500, 200, 20]
   times = []
   for idx, s in enumerate(sizes):
       data = []
       print(s)
       for _ in range(iters[idx]):
           x = mx.nd.ones((s, s, s))
           mx.nd.waitall()
           start = time.time()
           y = mx.nd.transpose(x, (2, 0, 1))
           mx.nd.waitall()
           data.append((time.time() - start) * 1000)
       times.append(data)

   print('mxnet version: %s' % mx.__version__)
   for idx, s in enumerate(sizes):
       print('--------------------')
       print('size: %s' % str(s))
       print('p50: %4.2f ms' % np.percentile(times[idx], 50))
       print('p90: %4.2f ms' % np.percentile(times[idx], 90))
       print('p99: %4.2f ms' % np.percentile(times[idx], 99))
   ```
   
   Changing the `index_t` type from `int64_t` to `int32_t` consistently reduces the 50th-percentile runtime of `transpose` at size 500 from 5000 ms to 2400 ms.
   
   Changing the data type in the operator (https://github.com/dmlc/mshadow/blob/master/mshadow/extension/transpose.h#L70) alone also reduces the 50th-percentile runtime at size 500 to 2600 ms.
   
   I therefore conclude that the performance degradation is caused by the slower runtime of 64-bit integer arithmetic compared to 32-bit.
   
   To further isolate the arithmetic operations, I benchmarked them with a small standalone program [here](https://github.com/apeforest/doraemon/blob/master/perf32vs64.cc). The runtime results are shown below:
   
   ```
   result = 49995000
   Add 32-bit time in ms 1359
   result = 49995000
   Add 64-bit time in ms 1971
   result = 349965000
   Add Mul 32-bit time in ms 1196
   result = 349965000
   Add Mul 64-bit time in ms 3477
   result = 7137858
   Add Div 32-bit time in ms 2878
   result = 7137858
   Add Div 64-bit time in ms 8499
   ```
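   The shape of that comparison can be reproduced with a short self-contained benchmark. This is a hedged reconstruction, not the linked `perf32vs64.cc` itself; the loop bound, constant `7`, and rep count are assumptions chosen so the `result` lines match the output above:

   ```cpp
   #include <cassert>
   #include <chrono>
   #include <cstdint>
   #include <cstdio>

   // Each loop sums a simple expression over i = 0..n-1; the template
   // parameter selects 32- or 64-bit integer arithmetic.
   template <typename T> T add_loop(T n) { T s = 0; for (T i = 0; i < n; ++i) s += i;     return s; }
   template <typename T> T mul_loop(T n) { T s = 0; for (T i = 0; i < n; ++i) s += i * 7; return s; }
   template <typename T> T div_loop(T n) { T s = 0; for (T i = 0; i < n; ++i) s += i / 7; return s; }

   // Time `reps` repetitions of f(n) and return milliseconds. The volatile
   // sink keeps the compiler from folding the whole loop away.
   template <typename T>
   long long time_ms(T (*f)(T), T n, int reps) {
     volatile T sink = 0;
     auto t0 = std::chrono::steady_clock::now();
     for (int r = 0; r < reps; ++r) sink = f(n);
     auto t1 = std::chrono::steady_clock::now();
     std::printf("result = %lld\n", static_cast<long long>(sink));
     return std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count();
   }

   int main() {
     const int reps = 100000;  // tune for your machine
     std::printf("Add 32-bit time in ms %lld\n", time_ms<int32_t>(add_loop, 10000, reps));
     std::printf("Add 64-bit time in ms %lld\n", time_ms<int64_t>(add_loop, 10000, reps));
     std::printf("Add Mul 32-bit time in ms %lld\n", time_ms<int32_t>(mul_loop, 10000, reps));
     std::printf("Add Mul 64-bit time in ms %lld\n", time_ms<int64_t>(mul_loop, 10000, reps));
     std::printf("Add Div 32-bit time in ms %lld\n", time_ms<int32_t>(div_loop, 10000, reps));
     std::printf("Add Div 64-bit time in ms %lld\n", time_ms<int64_t>(div_loop, 10000, reps));
     // Sanity checks: these match the result lines in the output above.
     assert(add_loop<int64_t>(10000) == 49995000);
     assert(mul_loop<int64_t>(10000) == 349965000);
     assert(div_loop<int64_t>(10000) == 7137858);
     return 0;
   }
   ```

   The division case shows the largest gap, which is consistent with transpose-style indexing (div/mod per element) degrading the most.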
   
   There are a few solutions to this problem:
   (1) Add a compilation flag to choose the data type for tensor size (this PR)
   (2) Add an environment variable to choose the data type for tensor size at runtime
   (3) Choose the data type for tensor size at runtime based on the size of the tensor
   
   Given the expression templates used in mshadow for operators, either (2) or (3) requires significant changes to the mshadow library. (1) can serve as a quick fix for the performance degradation reported in several issues:
   https://github.com/apache/incubator-mxnet/issues/14496,
   https://github.com/apache/incubator-mxnet/issues/13928,
   https://github.com/apache/incubator-mxnet/issues/14563,
   https://github.com/apache/incubator-mxnet/issues/14569
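   A minimal sketch of how option (1) could look in a shared header. The flag name `MSHADOW_INT64_TENSOR_SIZE` and the `index_t` wiring are assumptions about this WIP, not confirmed identifiers:

   ```cpp
   // Sketch of a compile-time switch for the tensor index type. The flag
   // name MSHADOW_INT64_TENSOR_SIZE is a hypothetical choice, not a
   // confirmed identifier from this PR.
   #include <cassert>
   #include <cstdint>

   #ifndef MSHADOW_INT64_TENSOR_SIZE
   #define MSHADOW_INT64_TENSOR_SIZE 0  // default: faster 32-bit indexing
   #endif

   #if MSHADOW_INT64_TENSOR_SIZE == 1
   typedef int64_t index_t;  // large-tensor support, slower arithmetic
   #else
   typedef int32_t index_t;  // at most ~2^31 elements, faster arithmetic
   #endif

   int main() {
     // Built without -DMSHADOW_INT64_TENSOR_SIZE=1, index_t stays 32-bit.
     assert(sizeof(index_t) == (MSHADOW_INT64_TENSOR_SIZE ? 8 : 4));
     return 0;
   }
   ```

   Building with `-DMSHADOW_INT64_TENSOR_SIZE=1` would flip `index_t` to 64-bit for users who need tensors larger than 2^31 elements, while the default keeps the faster 32-bit arithmetic.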
   
   Any other suggestions are appreciated.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   
