[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf

2020-02-13 Thread GitBox
ChaiBapchya commented on a change in pull request #17456: Implement remaining 
nn_basic ops in opperf
URL: https://github.com/apache/incubator-mxnet/pull/17456#discussion_r379217519
 
 

 ##
 File path: benchmark/opperf/utils/op_registry_utils.py
 ##
 @@ -117,9 +117,13 @@ def prepare_op_inputs(op, arg_params):
 
 # 3d tensor is needed by following ops
 ops_3d = ['CTCLoss', 'ctc_loss']
-
+
 
 Review comment:
  Actually, you need to rebase anyway, so fix it in this PR 😛 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf

2020-02-13 Thread GitBox
ChaiBapchya commented on a change in pull request #17456: Implement remaining 
nn_basic ops in opperf
URL: https://github.com/apache/incubator-mxnet/pull/17456#discussion_r379217321
 
 

 ##
 File path: benchmark/opperf/utils/op_registry_utils.py
 ##
 @@ -117,9 +117,13 @@ def prepare_op_inputs(op, arg_params):
 
 # 3d tensor is needed by following ops
 ops_3d = ['CTCLoss', 'ctc_loss']
-
+
 
 Review comment:
  nit: fix indent
  If it's the only change needed, don't do it in this PR, to prevent CI reruns. 




[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf

2020-02-06 Thread GitBox
ChaiBapchya commented on a change in pull request #17456: Implement remaining 
nn_basic ops in opperf
URL: https://github.com/apache/incubator-mxnet/pull/17456#discussion_r376044555
 
 

 ##
 File path: benchmark/opperf/utils/op_registry_utils.py
 ##
 @@ -122,12 +122,21 @@ def prepare_op_inputs(op, arg_params):
 # 3d tensor is needed by following ops
 ops_3d = ['CTCLoss', 'ctc_loss']
 
+custom_data = ['SpatialTransformer', 'col2im', 'RNN', 'GroupNorm', 'Dropout',
+               'FullyConnected', 'SoftmaxOutput', 'LinearRegressionOutput', 'BatchNorm',
+               'LogisticRegressionOutput', 'MAERegressionOutput', 'SVMOutput', 'L2Normalization',
+               'LayerNorm', 'InstanceNorm', 'Embedding', 'Correlation', 'im2col', 'LRN']
+
+# Args for NN basic ops which vary across different ops
+ops_nn_variable_args = ['weight', 'label', 'mode', 'gamma', 'beta', 'kernel', 'stride',
+                        'dilate', 'pad', 'p', 'axis', 'loc', "moving_mean", "moving_var"]
 
 Review comment:
   why isn't this variable `ops_nn_variable_args` used anywhere else?




[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf

2020-01-30 Thread GitBox
ChaiBapchya commented on a change in pull request #17456: Implement remaining 
nn_basic ops in opperf
URL: https://github.com/apache/incubator-mxnet/pull/17456#discussion_r373267919
 
 

 ##
 File path: benchmark/opperf/utils/op_registry_utils.py
 ##
 @@ -253,6 +271,27 @@ def get_all_reduction_operators():
         reduction_mx_operators[op_name] = mx_operators[op_name]
     return reduction_mx_operators
 
+def get_all_nn_basic_operators():
 
 Review comment:
  1. nn_basic ops as a category was already introduced by Sandeep in his initial opperf PR.
  2. It made sense to keep adding NN-layer-related ops to this category.
  It could be better phrased as nn_layer_ops if that makes sense...
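
  For illustration, the category helper under discussion could look like the sketch below. The allow-list contents and the registry-dict shape are assumptions based on the snippets quoted in this thread, not opperf's actual code:

```python
# Illustrative sketch (not opperf's actual implementation): a category
# helper that filters the full MXNet operator registry down to a fixed
# allow-list of NN basic / NN layer ops, mirroring the pattern of
# get_all_reduction_operators() quoted earlier in this thread.
NN_BASIC_OPS = ['FullyConnected', 'Dropout', 'BatchNorm', 'SoftmaxOutput',
                'L2Normalization', 'LayerNorm', 'InstanceNorm', 'Embedding',
                'Correlation', 'SpatialTransformer', 'im2col', 'col2im',
                'GroupNorm', 'RNN', 'LRN']

def get_all_nn_basic_operators(mx_operators):
    """Return {op_name: op_params} for every registered NN basic op."""
    return {name: params for name, params in mx_operators.items()
            if name in NN_BASIC_OPS}
```

  Under this shape, the rename to nn_layer_ops suggested above would only touch the list and the function name.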




[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf

2020-01-30 Thread GitBox
ChaiBapchya commented on a change in pull request #17456: Implement remaining 
nn_basic ops in opperf
URL: https://github.com/apache/incubator-mxnet/pull/17456#discussion_r373266395
 
 

 ##
 File path: benchmark/opperf/nd_operations/nn_basic_operators.py
 ##
 @@ -16,71 +16,59 @@
 # under the License.
 
 import mxnet as mx
-from benchmark.opperf.utils.benchmark_utils import run_performance_test
-from benchmark.opperf.utils.common_utils import merge_map_list
-from benchmark.opperf.rules.default_params import MX_OP_MODULE
+
+from benchmark.opperf.utils.op_registry_utils import get_all_nn_basic_operators
+from benchmark.opperf.utils.benchmark_utils import run_op_benchmarks
 
 """Performance benchmark tests for MXNet NDArray basic NN Operators.
 
 1. FullyConnected
 2. Dropout
 3. BatchNorm
+4. SoftmaxOutput
+5. LinearRegressionOutput
+6. LogisticRegressionOutput
+7. MAERegressionOutput
+8. SVMOutput
+9. L2Normalization
+10. LayerNorm
+11. InstanceNorm
+12. Embedding
+13. Correlation
+14. SpatialTransformer
+15. im2col
+16. col2im
+17. GroupNorm
+18. RNN
+19. LRN
 
 """
 
 
 def run_nn_basic_operators_benchmarks(ctx=mx.cpu(), dtype='float32', profiler='native', warmup=25, runs=100):
-    # FullyConnnected operator benchmarks
-    fc_benchmark_res = run_performance_test([getattr(MX_OP_MODULE, "FullyConnected")],
-                                            run_backward=True,
-                                            dtype=dtype,
-                                            ctx=ctx,
-                                            profiler=profiler,
-                                            inputs=[{"data": (32, 3, 256, 256),
-                                                     "num_hidden": 64,
-                                                     "weight": (64, 3 * 256 * 256),
-                                                     "bias": (64,),
-                                                     "flatten": True},
-                                                    {"data": (32, 3, 256, 256),
-                                                     "num_hidden": 64,
-                                                     "weight": (64, 256),
-                                                     "bias": (64,),
-                                                     "flatten": False}],
-                                            warmup=warmup,
-                                            runs=runs)
+    """Runs benchmarks with the given context and precision (dtype) for all the NN basic
+    operators in MXNet.
+
+    Parameters
+    ----------
 
 Review comment:
  Good catch.
  I handled the rest of the places where `profiler` is used as a param, but I forgot to add it in the function description.
  I have added them in this PR https://github.com/apache/incubator-mxnet/pull/17494/files

  @connorgoggins you'd still have to add it to nn_basic_ops and nn_activation_ops (missed that :p)
   





[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf

2020-01-30 Thread GitBox
ChaiBapchya commented on a change in pull request #17456: Implement remaining 
nn_basic ops in opperf
URL: https://github.com/apache/incubator-mxnet/pull/17456#discussion_r373260622
 
 

 ##
 File path: CONTRIBUTORS.md
 ##
 @@ -249,6 +249,7 @@ List of Contributors
 * [Guanxin Qiao](https://github.com/guanxinq)
 * [dithyrambe](https://github.com/dithyrambe)
 * [Piljae Chae](https://github.com/IHateMint)
+* [Connor Goggins](https://github.com/connorgoggins)
 
 Review comment:
   YAY! 




[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf

2020-01-29 Thread GitBox
ChaiBapchya commented on a change in pull request #17456: Implement remaining 
nn_basic ops in opperf
URL: https://github.com/apache/incubator-mxnet/pull/17456#discussion_r372768045
 
 

 ##
 File path: benchmark/opperf/utils/benchmark_utils.py
 ##
 @@ -148,6 +148,11 @@ def run_op_benchmarks(ops, dtype, ctx, profiler, warmup, runs):
     for op, op_params in ops.items():
         # Prepare inputs for the operator
         inputs = prepare_op_inputs(op, op_params)
+
+        # Testing Embedding op backwards is currently unsupported
+        if op == "Embedding":
 
 Review comment:
   Yes please do.
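
  As a sketch of the skip being agreed on here (helper names are illustrative assumptions, not opperf's real API), the runner can consult a small deny-list so the Embedding op is benchmarked forward-only instead of failing:

```python
# Illustrative sketch of the guard under review: ops whose backward pass
# the benchmark harness cannot time are run forward-only. These helper
# names are assumptions for illustration, not opperf's actual API.
UNSUPPORTED_BACKWARD = {"Embedding"}

def supports_backward(op_name):
    """True if the harness should also time the op's backward pass."""
    return op_name not in UNSUPPORTED_BACKWARD

def plan_run_backward(op_names):
    """Map each op to the run_backward flag the runner should pass."""
    return {op: supports_backward(op) for op in op_names}
```

  Centralizing the check in one helper also addresses the concern raised below: both `run_op_benchmarks` and `run_performance_test` entry points could consult it.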




[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf

2020-01-29 Thread GitBox
ChaiBapchya commented on a change in pull request #17456: Implement remaining 
nn_basic ops in opperf
URL: https://github.com/apache/incubator-mxnet/pull/17456#discussion_r372721286
 
 

 ##
 File path: benchmark/opperf/utils/benchmark_utils.py
 ##
 @@ -148,6 +148,11 @@ def run_op_benchmarks(ops, dtype, ctx, profiler, warmup, runs):
     for op, op_params in ops.items():
         # Prepare inputs for the operator
         inputs = prepare_op_inputs(op, op_params)
+
+        # Testing Embedding op backwards is currently unsupported
+        if op == "Embedding":
 
 Review comment:
  Also, just adding it in `run_op_benchmarks` doesn't seal all the ends.
  There are a couple of ways to run OpPerf.
  For example, what happens when `run_performance_test` is used?
  Does the user get an error, or will it segfault?





[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf

2020-01-29 Thread GitBox
ChaiBapchya commented on a change in pull request #17456: Implement remaining 
nn_basic ops in opperf
URL: https://github.com/apache/incubator-mxnet/pull/17456#discussion_r372716418
 
 

 ##
 File path: benchmark/opperf/utils/benchmark_utils.py
 ##
 @@ -148,6 +148,11 @@ def run_op_benchmarks(ops, dtype, ctx, profiler, warmup, runs):
     for op, op_params in ops.items():
         # Prepare inputs for the operator
        inputs = prepare_op_inputs(op, op_params)
+
+        # Testing Embedding op backwards is currently unsupported
 
 Review comment:
  Is there an issue tracking this? If so, can it be linked here?




[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf

2020-01-29 Thread GitBox
ChaiBapchya commented on a change in pull request #17456: Implement remaining 
nn_basic ops in opperf
URL: https://github.com/apache/incubator-mxnet/pull/17456#discussion_r372717484
 
 

 ##
 File path: benchmark/opperf/rules/default_params.py
 ##
 @@ -164,4 +312,5 @@
                      "mu", "sigma", "lam", "alpha", "beta", "gamma", "k", "p",
                      "low", "high", "weight", "bias", "moving_mean", "moving_var",
                      "weight", "weight32", "grad", "mean", "var", "mom", "n", "d",
-                     "v", "z", "g", "delta", "args"]
+                     "v", "z", "g", "delta", "args", "label", "data1", "data2",
+                     "parameters", "state", "loc"]
 
 Review comment:
  Why aren't the default_data_* variables you created for the separate inputs added to this list? Does it work without creating an NDArray? 
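
  For context, the convention this list encodes can be sketched as follows (numpy stands in for mx.nd, and the helper name is hypothetical, not opperf's actual code): any argument whose name appears in the list is supplied as a shape tuple and materialized as a tensor, while other arguments pass through unchanged.

```python
import numpy as np

# Hypothetical sketch of the convention PARAMS_OF_TYPE_NDARRAY encodes:
# args named in the list arrive as shape tuples and are materialized as
# random tensors; everything else is forwarded as a plain keyword value.
# numpy stands in for mx.nd here; this is not opperf's actual code.
PARAMS_OF_TYPE_NDARRAY = ["data", "weight", "bias", "label", "loc"]

def materialize_inputs(kwargs):
    out = {}
    for name, value in kwargs.items():
        if name in PARAMS_OF_TYPE_NDARRAY:
            out[name] = np.random.rand(*value)   # shape tuple -> tensor
        else:
            out[name] = value                    # scalar / flag as-is
    return out
```

  Under this reading, a name left out of the list would reach the operator as a raw tuple rather than an NDArray, which is the failure mode the question above is probing.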



