[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf

2020-01-30 Thread GitBox
apeforest commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf
URL: https://github.com/apache/incubator-mxnet/pull/17456#discussion_r373259390
 
 

 ##
 File path: benchmark/opperf/nd_operations/nn_basic_operators.py
 ##
 @@ -16,71 +16,59 @@
 # under the License.
 
 import mxnet as mx
-from benchmark.opperf.utils.benchmark_utils import run_performance_test
-from benchmark.opperf.utils.common_utils import merge_map_list
-from benchmark.opperf.rules.default_params import MX_OP_MODULE
+
+from benchmark.opperf.utils.op_registry_utils import get_all_nn_basic_operators
+from benchmark.opperf.utils.benchmark_utils import run_op_benchmarks
 
 """Performance benchmark tests for MXNet NDArray basic NN Operators.
 
 1. FullyConnected
 2. Dropout
 3. BatchNorm
+4. SoftmaxOutput
+5. LinearRegressionOutput
+6. LogisticRegressionOutput
+7. MAERegressionOutput
+8. SVMOutput
+9. L2Normalization
+10. LayerNorm
+11. InstanceNorm
+12. Embedding
+13. Correlation
+14. SpatialTransformer
+15. im2col
+16. col2im
+17. GroupNorm
+18. RNN
+19. LRN
 
 """
 
 
 def run_nn_basic_operators_benchmarks(ctx=mx.cpu(), dtype='float32', profiler='native', warmup=25, runs=100):
-    # FullyConnnected operator benchmarks
-    fc_benchmark_res = run_performance_test([getattr(MX_OP_MODULE, "FullyConnected")],
-                                            run_backward=True,
-                                            dtype=dtype,
-                                            ctx=ctx,
-                                            profiler=profiler,
-                                            inputs=[{"data": (32, 3, 256, 256),
-                                                     "num_hidden": 64,
-                                                     "weight": (64, 3 * 256 * 256),
-                                                     "bias": (64,),
-                                                     "flatten": True},
-                                                    {"data": (32, 3, 256, 256),
-                                                     "num_hidden": 64,
-                                                     "weight": (64, 256),
-                                                     "bias": (64,),
-                                                     "flatten": False}],
-                                            warmup=warmup,
-                                            runs=runs)
+    """Runs benchmarks with the given context and precision (dtype) for all the NN basic
+    operators in MXNet.
+
+    Parameters
+    ----------
 
 Review comment:
   missing `profiler` here
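
For illustration, a minimal sketch of a numpydoc `Parameters` block that does list `profiler`; the parameter descriptions below are assumptions for the sketch, not text from the PR:

```python
import mxnet as mx


def run_nn_basic_operators_benchmarks(ctx=mx.cpu(), dtype='float32',
                                      profiler='native', warmup=25, runs=100):
    """Runs benchmarks with the given context and precision (dtype) for all
    the NN basic operators in MXNet.

    Parameters
    ----------
    ctx : Context, optional
        Context to run the benchmarks under. Defaults to mx.cpu().
    dtype : str, optional
        Precision of the benchmark inputs. Defaults to 'float32'.
    profiler : str, optional
        Timing backend to use, e.g. 'native' or 'python'. Defaults to 'native'.
    warmup : int, optional
        Number of untimed warmup iterations. Defaults to 25.
    runs : int, optional
        Number of timed runs per operator. Defaults to 100.
    """
```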


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf

2020-01-30 Thread GitBox
apeforest commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf
URL: https://github.com/apache/incubator-mxnet/pull/17456#discussion_r373260284
 
 

 ##
 File path: benchmark/opperf/utils/benchmark_utils.py
 ##
 @@ -143,19 +145,35 @@ def run_performance_test(ops, inputs, run_backward=True,
 
 
 def run_op_benchmarks(ops, dtype, ctx, profiler, warmup, runs):
+    # Running SoftmaxOutput backwards on GPU results in errors
+    # track issue here: https://github.com/apache/incubator-mxnet/issues/880
+    gpu_backwards_disabled_ops = ['SoftmaxOutput']
+
+    # Running im2col either forwards or backwards on GPU results in errors
+    # track issue here: https://github.com/apache/incubator-mxnet/issues/17493
+    gpu_disabled_ops = ['im2col']
+
     # For each operator, run benchmarks
     mx_op_benchmark_results = []
     for op, op_params in ops.items():
-        # Prepare inputs for the operator
-        inputs = prepare_op_inputs(op, op_params)
-        # Run benchmarks
-        cur_op_res = run_performance_test(op_params["nd_op_handle"],
-                                          run_backward=op_params["has_backward"],
-                                          dtype=dtype, ctx=ctx,
-                                          profiler=profiler,
-                                          inputs=inputs,
-                                          warmup=warmup, runs=runs)
-        mx_op_benchmark_results += cur_op_res
+        if not (ctx == mx.gpu() and op in gpu_disabled_ops):
 
 Review comment:
  Can we change the logic here to `ctx == mx.cpu() or op not in gpu_disabled_ops` for readability?
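
A minimal sketch of that guard, under the assumption that `ctx` is only ever `mx.cpu()` or `mx.gpu()`; the `should_run` helper is hypothetical, introduced only to illustrate the condition:

```python
import mxnet as mx

gpu_disabled_ops = ['im2col']


def should_run(op, ctx):
    # De Morgan's rewrite of `not (ctx == mx.gpu() and op in gpu_disabled_ops)`:
    # run unless we are on GPU and the op is GPU-disabled.
    return ctx != mx.gpu() or op not in gpu_disabled_ops


print(should_run('im2col', mx.gpu()))          # False: skip im2col on GPU
print(should_run('im2col', mx.cpu()))          # True
print(should_run('FullyConnected', mx.gpu()))  # True
```

Note that the strictly equivalent form of the original guard is `ctx != mx.gpu() or op not in gpu_disabled_ops`; writing `ctx == mx.cpu()` instead additionally assumes no other context type is ever passed in.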




[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf

2020-01-30 Thread GitBox
apeforest commented on a change in pull request #17456: Implement remaining nn_basic ops in opperf
URL: https://github.com/apache/incubator-mxnet/pull/17456#discussion_r373260722
 
 

 ##
 File path: benchmark/opperf/utils/op_registry_utils.py
 ##
 @@ -253,6 +271,27 @@ def get_all_reduction_operators():
 reduction_mx_operators[op_name] = mx_operators[op_name]
 return reduction_mx_operators
 
+def get_all_nn_basic_operators():
 
 Review comment:
  Could you be more specific about what "nn basic operators" include?
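
One possible way to make that explicit, sketched below: name the covered operators in a module-level list and in the helper's docstring. The operator names come from the docstring added in nn_basic_operators.py above; the list constant, the signature taking the registry dict as a parameter, and the filtering logic are assumptions made to keep the example self-contained, not the PR's actual implementation (which takes no arguments):

```python
# Hypothetical sketch, not the PR's code: an explicit whitelist makes the
# meaning of "NN basic operators" concrete and easy to document.
NN_BASIC_OPERATORS = [
    'FullyConnected', 'Dropout', 'BatchNorm', 'SoftmaxOutput',
    'LinearRegressionOutput', 'LogisticRegressionOutput', 'MAERegressionOutput',
    'SVMOutput', 'L2Normalization', 'LayerNorm', 'InstanceNorm', 'Embedding',
    'Correlation', 'SpatialTransformer', 'im2col', 'col2im', 'GroupNorm',
    'RNN', 'LRN',
]


def get_all_nn_basic_operators(mx_operators):
    """Selects the basic NN operators (FullyConnected, Dropout, BatchNorm,
    SoftmaxOutput, ..., RNN, LRN) from a dict of all registered MXNet operators.

    Parameters
    ----------
    mx_operators : dict
        Mapping of operator name -> operator metadata.

    Returns
    -------
    dict of operator name -> operator metadata, restricted to the NN basic ops.
    """
    return {name: meta for name, meta in mx_operators.items()
            if name in NN_BASIC_OPERATORS}
```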

