[GitHub] zhreshold commented on issue #11247: Add seed_aug parameter for ImageRecordItr to fix random seed for default augmentation

2018-06-19 Thread GitBox
zhreshold commented on issue #11247: Add seed_aug parameter for ImageRecordItr 
to fix random seed for default augmentation
URL: https://github.com/apache/incubator-mxnet/pull/11247#issuecomment-398631082
 
 
   see if rebase fixes the flaky test


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on a change in pull request #10951: [MXNET-545] Fix broken cython build

2018-06-19 Thread GitBox
marcoabreu commented on a change in pull request #10951: [MXNET-545] Fix broken 
cython build
URL: https://github.com/apache/incubator-mxnet/pull/10951#discussion_r196652867
 
 

 ##
 File path: ci/docker/install/ubuntu_python.sh
 ##
 @@ -29,5 +29,5 @@ wget -nv https://bootstrap.pypa.io/get-pip.py
 python3 get-pip.py
 python2 get-pip.py
 
-pip2 install nose cpplint==1.3.0 pylint==1.8.3 'numpy<1.15.0,>=1.8.2' nose-timer 'requests<2.19.0,>=2.18.4' h5py==2.8.0rc1 scipy==1.0.1
-pip3 install nose cpplint==1.3.0 pylint==1.8.3 'numpy<1.15.0,>=1.8.2' nose-timer 'requests<2.19.0,>=2.18.4' h5py==2.8.0rc1 scipy==1.0.1
+pip2 install nose cpplint==1.3.0 pylint==1.8.3 'numpy<1.15.0,>=1.8.2' nose-timer 'requests<2.19.0,>=2.18.4' h5py==2.8.0rc1 scipy==1.0.1 Cython
+pip3 install nose cpplint==1.3.0 pylint==1.8.3 'numpy<1.15.0,>=1.8.2' nose-timer 'requests<2.19.0,>=2.18.4' h5py==2.8.0rc1 scipy==1.0.1 Cython
 
 Review comment:
   Could we pin Cython, please?
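
   For illustration, a pinned install could look like this (the `0.28.2` version below is hypothetical, chosen only to show the syntax):
   ```
   pip3 install nose cpplint==1.3.0 pylint==1.8.3 'numpy<1.15.0,>=1.8.2' nose-timer 'requests<2.19.0,>=2.18.4' h5py==2.8.0rc1 scipy==1.0.1 Cython==0.28.2
   ```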




[GitHub] marcoabreu commented on a change in pull request #10827: [MXNET-405][WIP] Add 2 new pipelines to the Official CI and run nightly tests.

2018-06-19 Thread GitBox
marcoabreu commented on a change in pull request #10827: [MXNET-405][WIP] Add 2 
new pipelines to the Official CI and run nightly tests. 
URL: https://github.com/apache/incubator-mxnet/pull/10827#discussion_r196652237
 
 

 ##
 File path: tests/nightly/Jenkinsfile
 ##
 @@ -0,0 +1,185 @@
+// -*- mode: groovy -*-
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+
+// This is a Jenkinsfile for nightly tests. The format and some functions have been picked up from the top-level Jenkinsfile.
+
+err = null
+mx_lib = 'lib/libmxnet.so, lib/libmxnet.a, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/nnvm/lib/libnnvm.a'
+
+// pack libraries for later use
+def pack_lib(name, libs=mx_lib) {
+  sh """
+echo "Packing ${libs} into ${name}"
+echo ${libs} | sed -e 's/,/ /g' | xargs md5sum
+"""
+  stash includes: libs, name: name
+}
+
+// unpack libraries saved before
+def unpack_lib(name, libs=mx_lib) {
+  unstash name
+  sh """
+echo "Unpacked ${libs} from ${name}"
+echo ${libs} | sed -e 's/,/ /g' | xargs md5sum
+"""
+}
+
+def init_git() {
+  deleteDir()
+  retry(5) {
+try {
+  timeout(time: 15, unit: 'MINUTES') {
+checkout scm
+sh 'git submodule update --init --recursive'
+sh 'git clean -d -f'
+  }
+} catch (exc) {
+  deleteDir()
+  error "Failed to fetch source codes with ${exc}"
+  sleep 2
+}
+  }
+}
+
+def docker_run(platform, function_name, use_nvidia, shared_mem = '500m') {
+  def command = "ci/build.py --docker-registry ${env.DOCKER_CACHE_REGISTRY} %USE_NVIDIA% --platform %PLATFORM% --shm-size %SHARED_MEM% /work/runtime_functions.sh %FUNCTION_NAME%"
+  command = command.replaceAll('%USE_NVIDIA%', use_nvidia ? '--nvidiadocker' : '')
+  command = command.replaceAll('%PLATFORM%', platform)
+  command = command.replaceAll('%FUNCTION_NAME%', function_name)
+  command = command.replaceAll('%SHARED_MEM%', shared_mem)
+
+  sh command
+}
+
+try {
+  stage('NightlyTests'){
+parallel 'RATCheck: CPU': {
+  node('mxnetlinux-cpu') {
+ws('workspace/nt-RATTest') {
+  init_git()
+  docker_run('ubuntu_nightly_cpu', 'nightly_test_rat_check', false)
+}
+  }
+},
+'CompilationWarnings: CPU': {
+  node('mxnetlinux-cpu') {
+ws('workspace/nt-compilationTest') {
+  init_git()
+  docker_run('ubuntu_nightly_cpu', 'nightly_test_compilation_warning', false)
+}
+  }
+},
+'InstallationGuide: CPU': {
+  node('mxnetlinux-cpu') {
+ws('workspace/nt-Installation-cpu') {
+  init_git()
+  docker_run('ubuntu_base_cpu', 'nightly_test_installation ubuntu_python_cpu_virtualenv', false)
+  docker_run('ubuntu_base_cpu', 'nightly_test_installation ubuntu_python_cpu_pip', false)
+  //Docker Install Test is currently disabled and tracked here: https://github.com/apache/incubator-mxnet/issues/11288
+  //docker_run('ubuntu_base_cpu', 'nightly_test_installation ubuntu_python_cpu_docker', false)
+  docker_run('ubuntu_base_cpu', 'nightly_test_installation ubuntu_python_cpu_source', false)
+}
+  }
+},
+'InstallationGuide: GPU': {
+  node('mxnetlinux-gpu') {
+ws('workspace/nt-Installation-gpu') {
+  init_git()
+  docker_run('ubuntu_base_gpu', 'nightly_test_installation ubuntu_python_gpu_virtualenv', true)
+  docker_run('ubuntu_base_gpu', 'nightly_test_installation ubuntu_python_gpu_pip', true)
+  //Docker Install Test is currently disabled and tracked here: https://github.com/apache/incubator-mxnet/issues/11288
+  //docker_run('ubuntu_base_gpu', 'nightly_test_installation ubuntu_python_gpu_docker', true)
+  docker_run('ubuntu_base_gpu', 'nightly_test_installation ubuntu_python_gpu_source', true)
+}
+  }
+},
+'PipTest: GPU': {
+  node('mxnetlinux-gpu') {
+ws('workspace/nt-pipTest') {
+  init_git()
+}
+  }
+},
+'Amalgamation-atlas: CPU': {
+  node('mxnetlinux-cpu') {
+ws('workspace/nt-amalgamation1') {
+  init_git()
+  docker_run('ubuntu_nightly_cpu', 'nightly_test_amalgamation USE_BLAS=atlas', false)
+}
+ 

[GitHub] marcoabreu commented on a change in pull request #10827: [MXNET-405][WIP] Add 2 new pipelines to the Official CI and run nightly tests.

2018-06-19 Thread GitBox
marcoabreu commented on a change in pull request #10827: [MXNET-405][WIP] Add 2 
new pipelines to the Official CI and run nightly tests. 
URL: https://github.com/apache/incubator-mxnet/pull/10827#discussion_r196651974
 
 

 ##
 File path: ci/docker/Dockerfile.build.ubuntu_nightly_gpu
 ##
 @@ -0,0 +1,64 @@
+# -*- mode: dockerfile -*-
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+# Dockerfile to run MXNet on Ubuntu 16.04 for CPU
 
 Review comment:
   Nit: CPU -> GPU




[GitHub] marcoabreu commented on a change in pull request #10827: [MXNET-405][WIP] Add 2 new pipelines to the Official CI and run nightly tests.

2018-06-19 Thread GitBox
marcoabreu commented on a change in pull request #10827: [MXNET-405][WIP] Add 2 
new pipelines to the Official CI and run nightly tests. 
URL: https://github.com/apache/incubator-mxnet/pull/10827#discussion_r196651940
 
 

 ##
 File path: ci/docker/Dockerfile.build.ubuntu_base_cpu
 ##
 @@ -0,0 +1,39 @@
+# -*- mode: dockerfile -*-
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+# Dockerfile to run the MXNet Installation Tests on Ubuntu 16.04
+# This should run in an empty docker with ubuntu and cuda.
 
 Review comment:
   Nit: No cuda in this container




[GitHub] Aman1994 commented on issue #11303: MXNET Scala package build failed

2018-06-19 Thread GitBox
Aman1994 commented on issue #11303: MXNET Scala package build failed
URL: 
https://github.com/apache/incubator-mxnet/issues/11303#issuecomment-398623217
 
 
   @lanking520 I tried `java -version`. I guess my Java version is 64-bit.
   openjdk version "1.8.0_171"
   OpenJDK Runtime Environment (build 1.8.0_171-8u171-b11-0ubuntu0.18.04.1-b11)
   OpenJDK 64-Bit Server VM (build 25.171-b11, mixed mode)
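
   A quick way to confirm the JVM's bitness from the shell (a sketch; `sun.arch.data.model` is a standard HotSpot/OpenJDK property, matching the VM shown above):
   ```
   java -XshowSettings:properties -version 2>&1 | grep sun.arch.data.model
   # prints "sun.arch.data.model = 64" on a 64-bit JVM
   ```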




[incubator-mxnet] branch master updated: Fix axis Bug in MKLDNN Softmax (#11335)

2018-06-19 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 6307c00  Fix axis Bug in MKLDNN Softmax (#11335)
6307c00 is described below

commit 6307c00b1a9648e86d357d259d2068f53cc0a257
Author: Xinyu Chen 
AuthorDate: Wed Jun 20 12:09:03 2018 +0800

Fix axis Bug in MKLDNN Softmax (#11335)

* add softmax improvement

* reuse CheckAxis code

* update comment

* add tests with negative axis
---
 src/operator/nn/mkldnn/mkldnn_softmax.cc | 5 -
 src/operator/nn/softmax.cc   | 4 +---
 tests/python/unittest/test_operator.py   | 2 +-
 3 files changed, 6 insertions(+), 5 deletions(-)

diff --git a/src/operator/nn/mkldnn/mkldnn_softmax.cc 
b/src/operator/nn/mkldnn/mkldnn_softmax.cc
index aa59f13..acfa358 100644
--- a/src/operator/nn/mkldnn/mkldnn_softmax.cc
+++ b/src/operator/nn/mkldnn/mkldnn_softmax.cc
@@ -26,6 +26,7 @@
 #include "../softmax-inl.h"
 #include "./mkldnn_ops-inl.h"
 #include "./mkldnn_base-inl.h"
+#include "../../tensor/broadcast_reduce_op.h"
 
 #if MXNET_USE_MKLDNN == 1
 namespace mxnet {
@@ -38,11 +39,13 @@ void MKLDNNSoftmaxForward(const nnvm::NodeAttrs& attrs, const OpContext &ctx,
   auto input_mem = in_data.GetMKLDNNData();
   mkldnn::memory::primitive_desc data_mpd = input_mem->get_primitive_desc();
   mkldnn::memory::desc data_md = data_mpd.desc();
+  int axis = CheckAxis(param.axis, in_data.shape().ndim());
+
   auto cpu_engine = data_mpd.get_engine();
   auto prop = ctx.is_train
 ? mkldnn::prop_kind::forward_training : mkldnn::prop_kind::forward_scoring;
   mkldnn::softmax_forward::desc desc = mkldnn::softmax_forward::desc(prop,
-  data_md, param.axis);
+  data_md, axis);
   mkldnn::softmax_forward::primitive_desc pdesc(desc, cpu_engine);
 
   auto output_memory = out_data.GetMKLDNNData();
diff --git a/src/operator/nn/softmax.cc b/src/operator/nn/softmax.cc
index f8cc6fe..e9b104f 100644
--- a/src/operator/nn/softmax.cc
+++ b/src/operator/nn/softmax.cc
@@ -38,10 +38,8 @@ static void SoftmaxComputeExCPU(const nnvm::NodeAttrs& attrs,
 const std::vector<NDArray>& inputs,
 const std::vector<OpReqType>& req,
 const std::vector<NDArray>& outputs) {
-  const SoftmaxParam& param = nnvm::get<SoftmaxParam>(attrs.parsed);
   // It seems MKLDNN softmax doesn't support training.
-  // and it only supports non-negative axis.
-  if (SupportMKLDNN(inputs[0]) && !ctx.is_train && param.axis >= 0) {
+  if (SupportMKLDNN(inputs[0]) && !ctx.is_train) {
 MKLDNN_OPCHECK_INIT(false, outputs.size(), inputs, outputs);
 MKLDNNSoftmaxForward(attrs, ctx, inputs[0], req[0], outputs[0]);
 auto fn = SoftmaxCompute<cpu, mxnet_op::softmax_fwd>;
diff --git a/tests/python/unittest/test_operator.py 
b/tests/python/unittest/test_operator.py
index f287c19..6742669 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -4098,7 +4098,7 @@ def test_new_softmax():
 for ndim in range(1, 5):
 for _ in range(5):
 shape = np.random.randint(1, 5, size=ndim)
-axis = np.random.randint(0, ndim)
+axis = np.random.randint(-ndim, ndim)
 data = np.random.uniform(-2, 2, size=shape)
 sym = mx.sym.softmax(axis=axis)
 check_symbolic_forward(sym, [data], [np_softmax(data, axis=axis)])
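
A minimal sanity check of what this commit enables (a sketch assuming an MXNet build with MKLDNN; `mx.nd.softmax` accepts an `axis` argument):

```python
import numpy as np
import mxnet as mx

x = mx.nd.uniform(-2, 2, shape=(2, 3, 4))
# With the fix, a negative axis is normalized through CheckAxis,
# so axis=-1 must agree with axis=2 on a 3-D input.
a = mx.nd.softmax(x, axis=-1).asnumpy()
b = mx.nd.softmax(x, axis=2).asnumpy()
assert np.allclose(a, b)
```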



[GitHub] szha closed pull request #11335: Fix axis Bug in MKLDNN Softmax

2018-06-19 Thread GitBox
szha closed pull request #11335: Fix axis Bug in MKLDNN Softmax
URL: https://github.com/apache/incubator-mxnet/pull/11335
 
 
   

This is a PR merged from a forked repository. As GitHub hides the original
diff on merge, it is supplied below for the sake of provenance:

diff --git a/src/operator/nn/mkldnn/mkldnn_softmax.cc 
b/src/operator/nn/mkldnn/mkldnn_softmax.cc
index aa59f13d06d..acfa358a796 100644
--- a/src/operator/nn/mkldnn/mkldnn_softmax.cc
+++ b/src/operator/nn/mkldnn/mkldnn_softmax.cc
@@ -26,6 +26,7 @@
 #include "../softmax-inl.h"
 #include "./mkldnn_ops-inl.h"
 #include "./mkldnn_base-inl.h"
+#include "../../tensor/broadcast_reduce_op.h"
 
 #if MXNET_USE_MKLDNN == 1
 namespace mxnet {
@@ -38,11 +39,13 @@ void MKLDNNSoftmaxForward(const nnvm::NodeAttrs& attrs, const OpContext &ctx,
   auto input_mem = in_data.GetMKLDNNData();
   mkldnn::memory::primitive_desc data_mpd = input_mem->get_primitive_desc();
   mkldnn::memory::desc data_md = data_mpd.desc();
+  int axis = CheckAxis(param.axis, in_data.shape().ndim());
+
   auto cpu_engine = data_mpd.get_engine();
   auto prop = ctx.is_train
 ? mkldnn::prop_kind::forward_training : mkldnn::prop_kind::forward_scoring;
   mkldnn::softmax_forward::desc desc = mkldnn::softmax_forward::desc(prop,
-  data_md, param.axis);
+  data_md, axis);
   mkldnn::softmax_forward::primitive_desc pdesc(desc, cpu_engine);
 
   auto output_memory = out_data.GetMKLDNNData();
diff --git a/src/operator/nn/softmax.cc b/src/operator/nn/softmax.cc
index f8cc6fee9a2..e9b104f1286 100644
--- a/src/operator/nn/softmax.cc
+++ b/src/operator/nn/softmax.cc
@@ -38,10 +38,8 @@ static void SoftmaxComputeExCPU(const nnvm::NodeAttrs& attrs,
 const std::vector<NDArray>& inputs,
 const std::vector<OpReqType>& req,
 const std::vector<NDArray>& outputs) {
-  const SoftmaxParam& param = nnvm::get<SoftmaxParam>(attrs.parsed);
   // It seems MKLDNN softmax doesn't support training.
-  // and it only supports non-negative axis.
-  if (SupportMKLDNN(inputs[0]) && !ctx.is_train && param.axis >= 0) {
+  if (SupportMKLDNN(inputs[0]) && !ctx.is_train) {
 MKLDNN_OPCHECK_INIT(false, outputs.size(), inputs, outputs);
 MKLDNNSoftmaxForward(attrs, ctx, inputs[0], req[0], outputs[0]);
 auto fn = SoftmaxCompute<cpu, mxnet_op::softmax_fwd>;
diff --git a/tests/python/unittest/test_operator.py 
b/tests/python/unittest/test_operator.py
index f287c191963..67426693436 100644
--- a/tests/python/unittest/test_operator.py
+++ b/tests/python/unittest/test_operator.py
@@ -4098,7 +4098,7 @@ def test_new_softmax():
 for ndim in range(1, 5):
 for _ in range(5):
 shape = np.random.randint(1, 5, size=ndim)
-axis = np.random.randint(0, ndim)
+axis = np.random.randint(-ndim, ndim)
 data = np.random.uniform(-2, 2, size=shape)
 sym = mx.sym.softmax(axis=axis)
 check_symbolic_forward(sym, [data], [np_softmax(data, axis=axis)])


 




[GitHub] xinyu-intel commented on issue #11335: Fix axis Bug in MKLDNN Softmax

2018-06-19 Thread GitBox
xinyu-intel commented on issue #11335: Fix axis Bug in MKLDNN Softmax
URL: https://github.com/apache/incubator-mxnet/pull/11335#issuecomment-398618061
 
 
   @szha unit tests with negative axis have been added and passed.




[GitHub] reminisce commented on a change in pull request #11251: [WIP] Graph partitioner and subgraph op

2018-06-19 Thread GitBox
reminisce commented on a change in pull request #11251: [WIP] Graph partitioner 
and subgraph op
URL: https://github.com/apache/incubator-mxnet/pull/11251#discussion_r196639178
 
 

 ##
 File path: src/operator/subgraph/partition_graph.cc
 ##
 @@ -0,0 +1,688 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2018 by Contributors
+ * \file partition_graph.cc
+ * \brief
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./default_subgraph_op.h"
+#include "./common.h"
+
+namespace nnvm {
+NodePtr CreateVariableNode(const std::string& name);
+}
+
+namespace mxnet {
+
+namespace op {
+
+using nnvm::Symbol;
+using nnvm::Node;
+using nnvm::NodePtr;
+using nnvm::NodeEntry;
+using nnvm::Graph;
+
+// TODO(junwu): Change this to 0
+#define SUBGRAPH_DEBUG 1
+
+namespace sg {  // sg stands for subgraph
+
+#if SUBGRAPH_DEBUG
+void PrintSubgraph(const std::vector<SimpleNodePtr>& simple_nodes) {
+  std::string op_names = "";
+  for (size_t i = 0; i < simple_nodes.size(); ++i) {
+op_names += simple_nodes[i]->node->attrs.name + ' ';
+  }
+  LOG(INFO) << "Subgraph node names: " << op_names;
+}
+
+void PrintNodeEntry(const nnvm::NodeEntry& entry) {
+  std::string ret = "NodeEntry: node_name=" + entry.node->attrs.name
++ ", index=" + std::to_string(entry.index) + ", version=" + 
std::to_string(entry.version);
+  LOG(INFO) << ret;
+}
+
+void PrintNodeEntries(const std::vector<nnvm::NodeEntry*>& entries) {
+  for (size_t i = 0; i < entries.size(); ++i) {
+PrintNodeEntry(*entries[i]);
+  }
+}
+#endif
+
+/*!
+ * \brief Given a MXNet computational graph, create an undirected graph from 
it.
+ * \param g the MXNet computational graph
+ * \param simple_nodes the nodes of undirected graph in top sorted order
+ */
+void CreateSimpleGraph(const Graph& g,
+                       std::vector<SimpleNodePtr>* simple_nodes) {
+  const auto& indexed_graph = g.indexed_graph();
+  simple_nodes->reserve(indexed_graph.num_nodes());
+  DFSVisit(g.outputs, [&](const NodePtr& node) {
+SimpleNodePtr sn = SimpleNode::Create();
+sn->node = node.get();
+for (size_t i = 0; i < sn->node->inputs.size(); ++i) {
+  const auto& e = sn->node->inputs[i];
+  const auto input_nid = indexed_graph.node_id(e.node.get());
+  CHECK_LT(input_nid, simple_nodes->size());
+  auto& input_node_outputs = (*simple_nodes)[input_nid]->outputs;
+  auto it = input_node_outputs.find(sn->node);
+  if (it == input_node_outputs.end()) {
+input_node_outputs.emplace(sn->node, std::vector<size_t>{i});
+  } else {
+it->second.push_back(i);
+  }
+}
+simple_nodes->emplace_back(std::move(sn));
+  });
+}
+
+/*!
+ * \brief Reset labels of the subgraph nodes to the original state
+ * and clear the vector of subgraph nodes.
+ */
+void ResetNodeLabels(const nnvm::Graph& g,
+                     const std::vector<SimpleNodePtr>& simple_nodes,
+                     std::vector<nnvm::Node*>* subgraph_nodes) {
+  for (auto n : *subgraph_nodes) {
+const auto nid = g.indexed_graph().node_id(n);
+simple_nodes[nid]->label = -1;
+  }
+  subgraph_nodes->clear();
+}
+
+/*!
+ * \brief This function traverses the nodes in a computation graph from a 
starting
+ * node following the input edges and output edges, and marks all nodes that
+ * can be accessed from the starting node. Before the function returns,
+ * it will conduct checking whether there is a loop between the potential 
subgraph
+ * and the outside nodes. If so, add the node that should break the loop
+ * in excluded_nodes and return false. Otherwise, return true.
+ * \param g the whole graph
+ * \subgraph_selector determines whether the visited node should be chosen or not
+ * \label the label of the current subgraph
+ * \snid node id of the seed simple node
+ * \simple_nodes all simple nodes in the top sorted order
+ * \subgraph_nodes all the nodes belonging to the same subgraph of seed node
+ * \excluded_nodes set of nodes that should be excluded from the current 
subgraph
+ */
+bool LabelSubgraph(const Graph& g,
+   SubgraphSelectorPtr subgraph_selector,
+   const int label,
+   const size_t snid,  // simple node id, this is a seed
+   

[GitHub] yjcn closed issue #11242: Same model with c_predict_api gets an incorrect result but it is right in python.

2018-06-19 Thread GitBox
yjcn closed issue #11242: Same model with c_predict_api gets an incorrect 
result but it is right in python.
URL: https://github.com/apache/incubator-mxnet/issues/11242
 
 
   




[GitHub] yjcn commented on issue #11242: Same model with c_predict_api gets an incorrect result but it is right in python.

2018-06-19 Thread GitBox
yjcn commented on issue #11242: Same model with c_predict_api gets an incorrect 
result but it is right in python.
URL: 
https://github.com/apache/incubator-mxnet/issues/11242#issuecomment-398611793
 
 
   I found the problem: the GetImageFile function is wrong. The corrected function is:
   ```
   void GetImageFile(const std::string image_file,
                     mx_float* image_data, const int channels,
                     const cv::Size resize_size, const mx_float* mean_data = nullptr) {
     // Read any kind of image file into a 3-channel BGR image
     cv::Mat im_ori = cv::imread(image_file, cv::IMREAD_COLOR);

     if (im_ori.empty()) {
       std::cerr << "Can't open the image. Please check " << image_file << ". \n";
       assert(false);
     }

     cv::Mat im;
     cv::resize(im_ori, im, resize_size);

     // Per-channel means (BGR order)
     float mean_b = 104.0, mean_g = 117.0, mean_r = 123.0;
     for (int i = 0; i < im.rows; ++i) {
       uchar* data = im.ptr<uchar>(i);
       for (int j = 0; j < im.cols; ++j) {
         // De-interleave OpenCV's BGR bytes into planar R, G, B
         // (R in plane 0, G in plane 1, B in plane 2), subtracting means.
         image_data[2*im.rows*im.cols + i*im.cols + j] = static_cast<mx_float>(*data++) - mean_b;
         image_data[im.rows*im.cols + i*im.cols + j] = static_cast<mx_float>(*data++) - mean_g;
         image_data[i*im.cols + j] = static_cast<mx_float>(*data++) - mean_r;
       }
     }
   }
   ```
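
   For context, a hypothetical call site for the corrected function (file name and the 224x224 shape are illustrative; the buffer must hold channels * rows * cols floats):
   ```
   std::vector<mx_float> image_data(3 * 224 * 224);
   GetImageFile("cat.jpg", image_data.data(), 3, cv::Size(224, 224));
   // image_data now holds planar R, G, B floats with per-channel means subtracted
   ```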




[GitHub] zheng-da commented on a change in pull request #11251: [WIP] Graph partitioner and subgraph op

2018-06-19 Thread GitBox
zheng-da commented on a change in pull request #11251: [WIP] Graph partitioner 
and subgraph op
URL: https://github.com/apache/incubator-mxnet/pull/11251#discussion_r196637903
 
 

 ##
 File path: src/operator/subgraph/partition_graph.cc
 ##
 @@ -0,0 +1,688 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2018 by Contributors
+ * \file partition_graph.cc
+ * \brief
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./default_subgraph_op.h"
+#include "./common.h"
+
+namespace nnvm {
+NodePtr CreateVariableNode(const std::string& name);
+}
+
+namespace mxnet {
+
+namespace op {
+
+using nnvm::Symbol;
+using nnvm::Node;
+using nnvm::NodePtr;
+using nnvm::NodeEntry;
+using nnvm::Graph;
+
+// TODO(junwu): Change this to 0
+#define SUBGRAPH_DEBUG 1
+
+namespace sg {  // sg stands for subgraph
+
+#if SUBGRAPH_DEBUG
+void PrintSubgraph(const std::vector<SimpleNodePtr>& simple_nodes) {
+  std::string op_names = "";
+  for (size_t i = 0; i < simple_nodes.size(); ++i) {
+op_names += simple_nodes[i]->node->attrs.name + ' ';
+  }
+  LOG(INFO) << "Subgraph node names: " << op_names;
+}
+
+void PrintNodeEntry(const nnvm::NodeEntry& entry) {
+  std::string ret = "NodeEntry: node_name=" + entry.node->attrs.name
++ ", index=" + std::to_string(entry.index) + ", version=" + 
std::to_string(entry.version);
+  LOG(INFO) << ret;
+}
+
+void PrintNodeEntries(const std::vector<nnvm::NodeEntry*>& entries) {
+  for (size_t i = 0; i < entries.size(); ++i) {
+PrintNodeEntry(*entries[i]);
+  }
+}
+#endif
+
+/*!
+ * \brief Given a MXNet computational graph, create an undirected graph from 
it.
+ * \param g the MXNet computational graph
+ * \param simple_nodes the nodes of undirected graph in top sorted order
+ */
+void CreateSimpleGraph(const Graph& g,
+                       std::vector<SimpleNodePtr>* simple_nodes) {
+  const auto& indexed_graph = g.indexed_graph();
+  simple_nodes->reserve(indexed_graph.num_nodes());
+  DFSVisit(g.outputs, [&](const NodePtr& node) {
+SimpleNodePtr sn = SimpleNode::Create();
+sn->node = node.get();
+for (size_t i = 0; i < sn->node->inputs.size(); ++i) {
+  const auto& e = sn->node->inputs[i];
+  const auto input_nid = indexed_graph.node_id(e.node.get());
+  CHECK_LT(input_nid, simple_nodes->size());
+  auto& input_node_outputs = (*simple_nodes)[input_nid]->outputs;
+  auto it = input_node_outputs.find(sn->node);
+  if (it == input_node_outputs.end()) {
+input_node_outputs.emplace(sn->node, std::vector<size_t>{i});
+  } else {
+it->second.push_back(i);
+  }
+}
+simple_nodes->emplace_back(std::move(sn));
+  });
+}
+
+/*!
+ * \brief Reset labels of the subgraph nodes to the original state
+ * and clear the vector of subgraph nodes.
+ */
+void ResetNodeLabels(const nnvm::Graph& g,
+                     const std::vector<SimpleNodePtr>& simple_nodes,
+                     std::vector<nnvm::Node*>* subgraph_nodes) {
+  for (auto n : *subgraph_nodes) {
+const auto nid = g.indexed_graph().node_id(n);
+simple_nodes[nid]->label = -1;
+  }
+  subgraph_nodes->clear();
+}
+
+/*!
+ * \brief This function traverses the nodes in a computation graph from a 
starting
+ * node following the input edges and output edges, and marks all nodes that
+ * can be accessed from the starting node. Before the function returns,
+ * it will conduct checking whether there is a loop between the potential 
subgraph
+ * and the outside nodes. If so, add the node that should break the loop
+ * in excluded_nodes and return false. Otherwise, return true.
+ * \param g the whole graph
+ * \subgraph_selector determines whether the visited node should be chosen or not
+ * \label the label of the current subgraph
+ * \snid node id of the seed simple node
+ * \simple_nodes all simple nodes in the top sorted order
+ * \subgraph_nodes all the nodes belonging to the same subgraph of seed node
+ * \excluded_nodes set of nodes that should be excluded from the current 
subgraph
+ */
+bool LabelSubgraph(const Graph& g,
+   SubgraphSelectorPtr subgraph_selector,
+   const int label,
+   const size_t snid,  // simple node id, this is a seed
+   

[GitHub] junrushao1994 opened a new issue #11343: Minor typo in ./src/operator/nn/lrn.cc

2018-06-19 Thread GitBox
junrushao1994 opened a new issue #11343: Minor typo in ./src/operator/nn/lrn.cc
URL: https://github.com/apache/incubator-mxnet/issues/11343
 
 
   ## Description
   
   Line 188 in [lrn.cc](https://github.com/apache/incubator-mxnet/blob/master/src/operator/nn/lrn.cc#L188) should be
   
   `.set_attr<nnvm::FListOutputNames>("FListOutputNames",`
   
   ## Build info
   
   MXNet commit hash: ccee17672b23fa864f5c2e67d6bcea5ccff2979e (current master)
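
   For reference, this is the usual shape of such a registration in NNVM (a sketch only; the output name below is a placeholder, not lrn.cc's actual list):
   ```
   .set_attr<nnvm::FListOutputNames>("FListOutputNames",
     [](const NodeAttrs& attrs) {
       return std::vector<std::string>{"output"};
     })
   ```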
   




[GitHub] reminisce commented on a change in pull request #11251: [WIP] Graph partitioner and subgraph op

2018-06-19 Thread GitBox
reminisce commented on a change in pull request #11251: [WIP] Graph partitioner 
and subgraph op
URL: https://github.com/apache/incubator-mxnet/pull/11251#discussion_r196637167
 
 

 ##
 File path: src/operator/subgraph/partition_graph.cc
 ##
 @@ -0,0 +1,688 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2018 by Contributors
+ * \file partition_graph.cc
+ * \brief
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./default_subgraph_op.h"
+#include "./common.h"
+
+namespace nnvm {
+NodePtr CreateVariableNode(const std::string& name);
+}
+
+namespace mxnet {
+
+namespace op {
+
+using nnvm::Symbol;
+using nnvm::Node;
+using nnvm::NodePtr;
+using nnvm::NodeEntry;
+using nnvm::Graph;
+
+// TODO(junwu): Change this to 0
+#define SUBGRAPH_DEBUG 1
+
+namespace sg {  // sg stands for subgraph
+
+#if SUBGRAPH_DEBUG
+void PrintSubgraph(const std::vector<SimpleNodePtr>& simple_nodes) {
+  std::string op_names = "";
+  for (size_t i = 0; i < simple_nodes.size(); ++i) {
+op_names += simple_nodes[i]->node->attrs.name + ' ';
+  }
+  LOG(INFO) << "Subgraph node names: " << op_names;
+}
+
+void PrintNodeEntry(const nnvm::NodeEntry& entry) {
+  std::string ret = "NodeEntry: node_name=" + entry.node->attrs.name
++ ", index=" + std::to_string(entry.index) + ", version=" + 
std::to_string(entry.version);
+  LOG(INFO) << ret;
+}
+
+void PrintNodeEntries(const std::vector<nnvm::NodeEntry*>& entries) {
+  for (size_t i = 0; i < entries.size(); ++i) {
+PrintNodeEntry(*entries[i]);
+  }
+}
+#endif
+
+/*!
+ * \brief Given a MXNet computational graph, create an undirected graph from 
it.
+ * \param g the MXNet computational graph
+ * \param simple_nodes the nodes of undirected graph in top sorted order
+ */
+void CreateSimpleGraph(const Graph& g,
+                       std::vector<SimpleNodePtr>* simple_nodes) {
+  const auto& indexed_graph = g.indexed_graph();
+  simple_nodes->reserve(indexed_graph.num_nodes());
+  DFSVisit(g.outputs, [&](const NodePtr& node) {
+SimpleNodePtr sn = SimpleNode::Create();
+sn->node = node.get();
+for (size_t i = 0; i < sn->node->inputs.size(); ++i) {
+  const auto& e = sn->node->inputs[i];
+  const auto input_nid = indexed_graph.node_id(e.node.get());
+  CHECK_LT(input_nid, simple_nodes->size());
+  auto& input_node_outputs = (*simple_nodes)[input_nid]->outputs;
+  auto it = input_node_outputs.find(sn->node);
+  if (it == input_node_outputs.end()) {
+input_node_outputs.emplace(sn->node, std::vector<size_t>{i});
+  } else {
+it->second.push_back(i);
+  }
+}
+simple_nodes->emplace_back(std::move(sn));
+  });
+}
+
+/*!
+ * \brief Reset labels of the subgraph nodes to the original state
+ * and clear the vector of subgraph nodes.
+ */
+void ResetNodeLabels(const nnvm::Graph& g,
+                     const std::vector<SimpleNodePtr>& simple_nodes,
+                     std::vector<nnvm::Node*>* subgraph_nodes) {
+  for (auto n : *subgraph_nodes) {
+const auto nid = g.indexed_graph().node_id(n);
+simple_nodes[nid]->label = -1;
+  }
+  subgraph_nodes->clear();
+}
+
+/*!
+ * \brief This function traverses the nodes in a computation graph from a 
starting
+ * node following the input edges and output edges, and marks all nodes that
+ * can be accessed from the starting node. Before the function returns,
+ * it will conduct checking whether there is a loop between the potential 
subgraph
+ * and the outside nodes. If so, add the node that should break the loop
+ * in excluded_nodes and return false. Otherwise, return true.
+ * \param g the whole graph
+ * \subgraph_selector determines whether the visited node should be chosen or not
+ * \label the label of the current subgraph
+ * \snid node id of the seed simple node
+ * \simple_nodes all simple nodes in the top sorted order
+ * \subgraph_nodes all the nodes belonging to the same subgraph of seed node
+ * \excluded_nodes set of nodes that should be excluded from the current 
subgraph
+ */
+bool LabelSubgraph(const Graph& g,
+   SubgraphSelectorPtr subgraph_selector,
+   const int label,
+   const size_t snid,  // simple node id, this is a seed
+   

[GitHub] reminisce commented on a change in pull request #11251: [WIP] Graph partitioner and subgraph op

2018-06-19 Thread GitBox
reminisce commented on a change in pull request #11251: [WIP] Graph partitioner 
and subgraph op
URL: https://github.com/apache/incubator-mxnet/pull/11251#discussion_r196636975
 
 

 ##
 File path: src/operator/subgraph/partition_graph.cc
 ##
 @@ -0,0 +1,688 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2018 by Contributors
+ * \file partition_graph.cc
+ * \brief
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./default_subgraph_op.h"
+#include "./common.h"
+
+namespace nnvm {
+NodePtr CreateVariableNode(const std::string& name);
+}
+
+namespace mxnet {
+
+namespace op {
+
+using nnvm::Symbol;
+using nnvm::Node;
+using nnvm::NodePtr;
+using nnvm::NodeEntry;
+using nnvm::Graph;
+
+// TODO(junwu): Change this to 0
+#define SUBGRAPH_DEBUG 1
+
+namespace sg {  // sg stands for subgraph
+
+#if SUBGRAPH_DEBUG
+void PrintSubgraph(const std::vector<SimpleNodePtr>& simple_nodes) {
+  std::string op_names = "";
+  for (size_t i = 0; i < simple_nodes.size(); ++i) {
+op_names += simple_nodes[i]->node->attrs.name + ' ';
+  }
+  LOG(INFO) << "Subgraph node names: " << op_names;
+}
+
+void PrintNodeEntry(const nnvm::NodeEntry& entry) {
+  std::string ret = "NodeEntry: node_name=" + entry.node->attrs.name
++ ", index=" + std::to_string(entry.index) + ", version=" + 
std::to_string(entry.version);
+  LOG(INFO) << ret;
+}
+
+void PrintNodeEntries(const std::vector<nnvm::NodeEntry*>& entries) {
+  for (size_t i = 0; i < entries.size(); ++i) {
+PrintNodeEntry(*entries[i]);
+  }
+}
+#endif
+
+/*!
+ * \brief Given a MXNet computational graph, create an undirected graph from 
it.
+ * \param g the MXNet computational graph
+ * \param simple_nodes the nodes of undirected graph in top sorted order
+ */
+void CreateSimpleGraph(const Graph& g,
+                       std::vector<SimpleNodePtr>* simple_nodes) {
+  const auto& indexed_graph = g.indexed_graph();
+  simple_nodes->reserve(indexed_graph.num_nodes());
+  DFSVisit(g.outputs, [&](const NodePtr& node) {
+SimpleNodePtr sn = SimpleNode::Create();
+sn->node = node.get();
+for (size_t i = 0; i < sn->node->inputs.size(); ++i) {
+  const auto& e = sn->node->inputs[i];
+  const auto input_nid = indexed_graph.node_id(e.node.get());
+  CHECK_LT(input_nid, simple_nodes->size());
+  auto& input_node_outputs = (*simple_nodes)[input_nid]->outputs;
+  auto it = input_node_outputs.find(sn->node);
+  if (it == input_node_outputs.end()) {
+input_node_outputs.emplace(sn->node, std::vector<size_t>{i});
+  } else {
+it->second.push_back(i);
+  }
+}
+simple_nodes->emplace_back(std::move(sn));
+  });
+}
+
+/*!
+ * \brief Reset labels of the subgraph nodes to the original state
+ * and clear the vector of subgraph nodes.
+ */
+void ResetNodeLabels(const nnvm::Graph& g,
+                     const std::vector<SimpleNodePtr>& simple_nodes,
+                     std::vector<nnvm::Node*>* subgraph_nodes) {
+  for (auto n : *subgraph_nodes) {
+const auto nid = g.indexed_graph().node_id(n);
+simple_nodes[nid]->label = -1;
+  }
+  subgraph_nodes->clear();
+}
+
+/*!
+ * \brief This function traverses the nodes in a computation graph from a 
starting
+ * node following the input edges and output edges, and marks all nodes that
+ * can be accessed from the starting node. Before the function returns,
+ * it will conduct checking whether there is a loop between the potential 
subgraph
+ * and the outside nodes. If so, add the node that should break the loop
+ * in excluded_nodes and return false. Otherwise, return true.
+ * \param g the whole graph
+ * \subgraph_selector determines whether the visited node should be chosen or not
+ * \label the label of the current subgraph
+ * \snid node id of the seed simple node
+ * \simple_nodes all simple nodes in the top sorted order
+ * \subgraph_nodes all the nodes belonging to the same subgraph of seed node
+ * \excluded_nodes set of nodes that should be excluded from the current 
subgraph
+ */
+bool LabelSubgraph(const Graph& g,
+   SubgraphSelectorPtr subgraph_selector,
+   const int label,
+   const size_t snid,  // simple node id, this is a seed
+   

[GitHub] azai91 commented on a change in pull request #11232: [MXNET-498] Test MKLDNN backward operators

2018-06-19 Thread GitBox
azai91 commented on a change in pull request #11232: [MXNET-498] Test MKLDNN 
backward operators 
URL: https://github.com/apache/incubator-mxnet/pull/11232#discussion_r196636904
 
 

 ##
 File path: tests/cpp/operator/mkldnn.cc
 ##
 @@ -93,40 +93,26 @@ TEST(MKLDNN_UTIL_FUNC, MemFormat) {
 
 // Init arrays with the default layout.
 static void InitDefaultArray(NDArray *arr, bool is_rand = false) {
 
 Review comment:
   No reason to have two inits. We can just have one that covers negative and 
positives. 
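
   A sketch of what a single init covering both signs might look like (hypothetical code, not the PR's actual change; assumes the test file's NDArray/TBlob helpers):
   ```
   static void InitDefaultArray(NDArray *arr, bool is_rand = false) {
     const TBlob &blob = arr->data();
     mshadow::default_real_t *data = blob.dptr<mshadow::default_real_t>();
     int size = blob.Size();
     for (int i = 0; i < size; i++)
       data[i] = is_rand ? std::rand() % 100 - 50  // random in [-50, 49]
                         : i - size / 2;           // deterministic, signed
   }
   ```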




[GitHub] zheng-da commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-06-19 Thread GitBox
zheng-da commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r196636701
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,287 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl gcc libopenblas-dev python python-pip python-dev python-opencv graphviz python-scipy python-sklearn
+```
+
+### Clone MXNet sources
+
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+cd incubator-mxnet
+git submodule update --recursive --init
 
 Review comment:
   i don't think we need this extra step. @xinyu-intel 





[GitHub] zheng-da commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-06-19 Thread GitBox
zheng-da commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r196636631
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,287 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl gcc libopenblas-dev python python-pip python-dev python-opencv graphviz python-scipy python-sklearn
+```
+
+### Clone MXNet sources
+
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+cd incubator-mxnet
+git submodule update --recursive --init
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(nproc) USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl USE_INTEL_PATH=/opt/intel
+```
+
+If you don't have full MKL library installed, you can use OpenBLAS by setting 
`USE_BLAS=openblas`.
+
+MacOS
+
+### Prerequisites
+
+Install the dependencies, required for MXNet, with the following commands:
+
+- [Homebrew](https://brew.sh/)
+- gcc (clang in macOS does not support OpenMP)
 
 Review comment:
   it should work for all versions.




[GitHub] zheng-da commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-06-19 Thread GitBox
zheng-da commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r196636502
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,287 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl gcc libopenblas-dev python python-pip python-dev python-opencv graphviz python-scipy python-sklearn
+```
+
+### Clone MXNet sources
+
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+cd incubator-mxnet
+git submodule update --recursive --init
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(nproc) USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl USE_INTEL_PATH=/opt/intel
+```
+
+If you don't have full MKL library installed, you can use OpenBLAS by setting 
`USE_BLAS=openblas`.
 
 Review comment:
   MKL is a faster alternative to OpenBLAS; one can replace the other.




[GitHub] zheng-da commented on a change in pull request #11232: [MXNET-498] Test MKLDNN backward operators

2018-06-19 Thread GitBox
zheng-da commented on a change in pull request #11232: [MXNET-498] Test MKLDNN 
backward operators 
URL: https://github.com/apache/incubator-mxnet/pull/11232#discussion_r196632389
 
 

 ##
 File path: tests/cpp/operator/mkldnn.cc
 ##
 @@ -93,40 +93,26 @@ TEST(MKLDNN_UTIL_FUNC, MemFormat) {
 
 // Init arrays with the default layout.
 static void InitDefaultArray(NDArray *arr, bool is_rand = false) {
 
 Review comment:
   why do you decide to remove this Init function?




[GitHub] zheng-da commented on a change in pull request #11232: [MXNET-498] Test MKLDNN backward operators

2018-06-19 Thread GitBox
zheng-da commented on a change in pull request #11232: [MXNET-498] Test MKLDNN 
backward operators 
URL: https://github.com/apache/incubator-mxnet/pull/11232#discussion_r196636207
 
 

 ##
 File path: tests/cpp/operator/mkldnn.cc
 ##
 @@ -650,172 +703,135 @@ TEST(MKLDNN_NDArray, CopyFrom) {
   MKLDNNStream::Get()->Submit();
  std::vector<NDArray *> inputs(1);
  inputs[0] = &in_arr.arr;
-  VerifyCopyResult(inputs, out_arr.arr);
+  VerifyCopyResult(inputs, {&out_arr.arr});
 }
   }
 }
 
-void TestUnaryOp(const OpAttrs &attrs, InitFunc init_fn, VerifyFunc verify_fn) {
-  std::vector<NDArray*> inputs(1);
-  std::vector<NDArray*> outputs(1);
-  std::vector<OpReqType> req(1);
-  std::vector<DispatchMode> dispatches = attrs.dispatches;
-
+TEST(MKLDNN_BASE, MKLDNNSum) {
 
 Review comment:
   could you move this test back to the end of the file? It's kind of hard for 
me to tell what you changed in the test.




[GitHub] xinyu-intel commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-06-19 Thread GitBox
xinyu-intel commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r196632936
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,287 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl gcc libopenblas-dev python python-pip python-dev python-opencv graphviz python-scipy python-sklearn
+```
+
+### Clone MXNet sources
+
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+cd incubator-mxnet
+git submodule update --recursive --init
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(nproc) USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl USE_INTEL_PATH=/opt/intel
+```
+
+If you don't have full MKL library installed, you can use OpenBLAS by setting 
`USE_BLAS=openblas`.
+
+MacOS
+
+### Prerequisites
+
+Install the dependencies, required for MXNet, with the following commands:
+
+- [Homebrew](https://brew.sh/)
+- gcc (clang in macOS does not support OpenMP)
+- OpenCV (for computer vision operations)
+
+```
+# Paste this command in Mac terminal to install Homebrew
+/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
+
+# install dependency
+brew update
+brew install pkg-config
+brew install graphviz
+brew tap homebrew/core
+brew install opencv
+brew tap homebrew/versions
+brew install gcc49
+brew link gcc49
+```
+
+### Enable OpenMP for MacOS
+
+If you want to enable OpenMP for better performance, you should modify these 
two files:
+
+1. Makefile L138:
 
 Review comment:
   https://github.com/apache/incubator-mxnet/blob/master/Makefile




[GitHub] xinyu-intel commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-06-19 Thread GitBox
xinyu-intel commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r196632964
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,287 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl gcc libopenblas-dev python python-pip python-dev python-opencv graphviz python-scipy python-sklearn
+```
+
+### Clone MXNet sources
+
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+cd incubator-mxnet
+git submodule update --recursive --init
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(nproc) USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl USE_INTEL_PATH=/opt/intel
+```
+
+If you don't have full MKL library installed, you can use OpenBLAS by setting 
`USE_BLAS=openblas`.
+
+MacOS
+
+### Prerequisites
+
+Install the dependencies, required for MXNet, with the following commands:
+
+- [Homebrew](https://brew.sh/)
+- gcc (clang in macOS does not support OpenMP)
+- OpenCV (for computer vision operations)
+
+```
+# Paste this command in Mac terminal to install Homebrew
+/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
+
+# install dependency
+brew update
+brew install pkg-config
+brew install graphviz
+brew tap homebrew/core
+brew install opencv
+brew tap homebrew/versions
+brew install gcc49
+brew link gcc49
+```
+
+### Enable OpenMP for MacOS
+
+If you want to enable OpenMP for better performance, you should modify these 
two files:
+
+1. Makefile L138:
+
+```
+ifeq ($(USE_OPENMP), 1)
+# ifneq ($(UNAME_S), Darwin)
+CFLAGS += -fopenmp
+# endif
+endif
+```
+
+2. prepare_mkldnn.sh L96:
 
 Review comment:
   https://github.com/apache/incubator-mxnet/blob/master/prepare_mkldnn.sh




[GitHub] xinyu-intel commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-06-19 Thread GitBox
xinyu-intel commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r196632864
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,287 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl gcc libopenblas-dev python python-pip python-dev python-opencv graphviz python-scipy python-sklearn
+```
+
+### Clone MXNet sources
+
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+cd incubator-mxnet
+git submodule update --recursive --init
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(nproc) USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl USE_INTEL_PATH=/opt/intel
+```
+
+If you don't have full MKL library installed, you can use OpenBLAS by setting 
`USE_BLAS=openblas`.
+
+MacOS
+
+### Prerequisites
+
+Install the dependencies, required for MXNet, with the following commands:
+
+- [Homebrew](https://brew.sh/)
+- gcc (clang in macOS does not support OpenMP)
+- OpenCV (for computer vision operations)
+
+```
+# Paste this command in Mac terminal to install Homebrew
+/usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
+
+# install dependency
+brew update
+brew install pkg-config
+brew install graphviz
+brew tap homebrew/core
+brew install opencv
+brew tap homebrew/versions
+brew install gcc49
+brew link gcc49
+```
+
+### Enable OpenMP for MacOS
+
+If you want to enable OpenMP for better performance, you should modify these 
two files:
+
+1. Makefile L138:
+
+```
+ifeq ($(USE_OPENMP), 1)
+# ifneq ($(UNAME_S), Darwin)
+CFLAGS += -fopenmp
+# endif
+endif
+```
+
+2. prepare_mkldnn.sh L96:
+
+```
+CC=gcc-4.9 CXX=g++-4.9 cmake $MKLDNN_ROOTDIR -DCMAKE_INSTALL_PREFIX=$MKLDNN_INSTALLDIR -B$MKLDNN_BUILDDIR -DARCH_OPT_FLAGS="-mtune=generic" -DWITH_TEST=OFF -DWITH_EXAMPLE=OFF >&2
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(sysctl -n hw.ncpu) CC=gcc-4.9 CXX=g++-4.9 USE_OPENCV=0 USE_OPENMP=1 
USE_MKLDNN=1 USE_BLAS=apple USE_PROFILER=1
+```
+
+*Note: Temporarily disable OPENCV.*
+
+Windows
+
+We recommend building and installing MXNet yourself using [Microsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/), or you can experiment with the latest [Microsoft Visual Studio 2017](https://www.visualstudio.com/downloads/).
+
+**Visual Studio 2015**
+
+To build and install MXNet yourself, you need the following dependencies; install them first:
+
+1. If [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older-downloads/) is not already 
installed, download and install it. You can download and install the free 
community edition.
+2. Download and install [CMake](https://cmake.org/) if it is not already installed.
+3. Download and install 
[OpenCV](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download).
+4. Unzip the OpenCV package.
+5. Set the environment variable ```OpenCV_DIR``` to point to the ```OpenCV 
build directory``` (```C:\opencv\build\x64\vc14``` for example). Also, you need 
to add the OpenCV bin directory (```C:\opencv\build\x64\vc14\bin``` for 
example) to the ``PATH`` variable.
+6. If you have the Intel Math Kernel Library (MKL) installed, set ```MKL_ROOT``` to point to the ```MKL``` directory that contains the ```include``` and ```lib``` directories. If you want to use MKL BLAS, set ```-DUSE_BLAS=mkl``` when running cmake. Typically, you can find the directory in
+```C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\mkl```.
+7. If you don't have the Intel Math Kernel Library (MKL) installed, download and install [OpenBLAS](http://sourceforge.net/projects/openblas/files/v0.2.14/). Note that you should also download ```mingw64.dll.zip``` along with OpenBLAS and add them to PATH.
+8. Set the environment variable ```OpenBLAS_HOME``` to point to the 
```OpenBLAS``` directory that contains the ```include``` and ```lib``` 
directories. Typically, you can find the directory in ```C:\Program files 
(x86)\OpenBLAS\```. 
+
+After you have installed all of the required dependencies, build the MXNet 
source code:
+
+1. Download the MXNet source code from 
[GitHub](https://github.com/apache/incubator-mxnet). Don't forget to pull the 
submodules:
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+```
+
+2. Copy file `3rdparty/mkldnn/config_template.vcxproj` to incubator-mxnet root.
+
+3. Start a Visual Studio command prompt.
+
+4. Use [CMake](https://cmake.org/) to create a Visual Studio solution in 
```./build``` or some other directory. Make sure to specify the architecture in 
the 
+[CMake](https://cmake.org/) command:
+```
+mkdir build
+cd build
+cmake -G 

[GitHub] xinyu-intel commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-06-19 Thread GitBox
xinyu-intel commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r196632833
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,287 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl 
gcc libopenblas-dev python python-pip python-dev python-opencv graphviz 
python-scipy python-sklearn
+```
+
+### Clone MXNet sources
+
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+cd incubator-mxnet
+git submodule update --recursive --init
 
 Review comment:
   del


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] xinyu-intel commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-06-19 Thread GitBox
xinyu-intel commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r196632758
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,287 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl 
gcc libopenblas-dev python python-pip python-dev python-opencv graphviz 
python-scipy python-sklearn
+```
+
+### Clone MXNet sources
+
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+cd incubator-mxnet
+git submodule update --recursive --init
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(nproc) USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl 
USE_INTEL_PATH=/opt/intel
+```
+
+If you don't have the full MKL library installed, you can use OpenBLAS by setting `USE_BLAS=openblas`.
+
+MacOS
+
+### Prerequisites
+
+Install the dependencies required for MXNet with the following commands:
+
+- [Homebrew](https://brew.sh/)
+- gcc (clang in macOS does not support OpenMP)
+- OpenCV (for computer vision operations)
+
+```
+# Paste this command in Mac terminal to install Homebrew
+/usr/bin/ruby -e "$(curl -fsSL 
https://raw.githubusercontent.com/Homebrew/install/master/install)"
+
+# install dependency
+brew update
+brew install pkg-config
+brew install graphviz
+brew tap homebrew/core
+brew install opencv
+brew tap homebrew/versions
+brew install gcc49
+brew link gcc49
+```
+
+### Enable OpenMP for MacOS
+
+If you want to enable OpenMP for better performance, you should modify these 
two files:
+
+1. Makefile L138:
+
+```
+ifeq ($(USE_OPENMP), 1)
+# ifneq ($(UNAME_S), Darwin)
+CFLAGS += -fopenmp
+# endif
+endif
+```
+
+2. prepare_mkldnn.sh L96:
+
+```
+CC=gcc-4.9 CXX=g++-4.9 cmake $MKLDNN_ROOTDIR 
-DCMAKE_INSTALL_PREFIX=$MKLDNN_INSTALLDIR -B$MKLDNN_BUILDDIR 
-DARCH_OPT_FLAGS="-mtune=generic" -DWITH_TEST=OFF -DWITH_EXAMPLE=OFF >&2
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(sysctl -n hw.ncpu) CC=gcc-4.9 CXX=g++-4.9 USE_OPENCV=0 USE_OPENMP=1 
USE_MKLDNN=1 USE_BLAS=apple USE_PROFILER=1
+```
+
+*Note: Temporarily disable OPENCV.*
+
+Windows
+
+We recommend building and installing MXNet yourself using [Microsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/), or you can experiment with the latest [Microsoft Visual Studio 2017](https://www.visualstudio.com/downloads/).
+
+**Visual Studio 2015**
+
+To build and install MXNet yourself, you need the following dependencies; install them first:
+
+1. If [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older-downloads/) is not already 
installed, download and install it. You can download and install the free 
community edition.
+2. Download and install [CMake](https://cmake.org/) if it is not already installed.
+3. Download and install 
[OpenCV](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download).
+4. Unzip the OpenCV package.
+5. Set the environment variable ```OpenCV_DIR``` to point to the ```OpenCV 
build directory``` (```C:\opencv\build\x64\vc14``` for example). Also, you need 
to add the OpenCV bin directory (```C:\opencv\build\x64\vc14\bin``` for 
example) to the ``PATH`` variable.
+6. If you have the Intel Math Kernel Library (MKL) installed, set ```MKL_ROOT``` to point to the ```MKL``` directory that contains the ```include``` and ```lib``` directories. If you want to use MKL BLAS, set ```-DUSE_BLAS=mkl``` when running cmake. Typically, you can find the directory in
+```C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\mkl```.
+7. If you don't have the Intel Math Kernel Library (MKL) installed, download and install [OpenBLAS](http://sourceforge.net/projects/openblas/files/v0.2.14/). Note that you should also download ```mingw64.dll.zip``` along with OpenBLAS and add them to PATH.
+8. Set the environment variable ```OpenBLAS_HOME``` to point to the 
```OpenBLAS``` directory that contains the ```include``` and ```lib``` 
directories. Typically, you can find the directory in ```C:\Program files 
(x86)\OpenBLAS\```. 
+
+After you have installed all of the required dependencies, build the MXNet 
source code:
+
+1. Download the MXNet source code from 
[GitHub](https://github.com/apache/incubator-mxnet). Don't forget to pull the 
submodules:
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+```
+
+2. Copy file `3rdparty/mkldnn/config_template.vcxproj` to incubator-mxnet root.
+
+3. Start a Visual Studio command prompt.
+
+4. Use [CMake](https://cmake.org/) to create a Visual Studio solution in 
```./build``` or some other directory. Make sure to specify the architecture in 
the 
+[CMake](https://cmake.org/) command:
+```
+mkdir build
+cd build
+cmake -G 

[GitHub] xinyu-intel commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-06-19 Thread GitBox
xinyu-intel commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r196632577
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,287 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl 
gcc libopenblas-dev python python-pip python-dev python-opencv graphviz 
python-scipy python-sklearn
+```
+
+### Clone MXNet sources
+
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+cd incubator-mxnet
+git submodule update --recursive --init
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(nproc) USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl 
USE_INTEL_PATH=/opt/intel
+```
+
+If you don't have the full MKL library installed, you can use OpenBLAS by setting `USE_BLAS=openblas`.
+
+MacOS
+
+### Prerequisites
+
+Install the dependencies required for MXNet with the following commands:
+
+- [Homebrew](https://brew.sh/)
+- gcc (clang in macOS does not support OpenMP)
+- OpenCV (for computer vision operations)
+
+```
+# Paste this command in Mac terminal to install Homebrew
+/usr/bin/ruby -e "$(curl -fsSL 
https://raw.githubusercontent.com/Homebrew/install/master/install)"
+
+# install dependency
+brew update
+brew install pkg-config
+brew install graphviz
+brew tap homebrew/core
+brew install opencv
+brew tap homebrew/versions
+brew install gcc49
+brew link gcc49
+```
+
+### Enable OpenMP for MacOS
+
+If you want to enable OpenMP for better performance, you should modify these 
two files:
+
+1. Makefile L138:
+
+```
+ifeq ($(USE_OPENMP), 1)
+# ifneq ($(UNAME_S), Darwin)
+CFLAGS += -fopenmp
+# endif
+endif
+```
+
+2. prepare_mkldnn.sh L96:
+
+```
+CC=gcc-4.9 CXX=g++-4.9 cmake $MKLDNN_ROOTDIR 
-DCMAKE_INSTALL_PREFIX=$MKLDNN_INSTALLDIR -B$MKLDNN_BUILDDIR 
-DARCH_OPT_FLAGS="-mtune=generic" -DWITH_TEST=OFF -DWITH_EXAMPLE=OFF >&2
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(sysctl -n hw.ncpu) CC=gcc-4.9 CXX=g++-4.9 USE_OPENCV=0 USE_OPENMP=1 
USE_MKLDNN=1 USE_BLAS=apple USE_PROFILER=1
+```
+
+*Note: Temporarily disable OPENCV.*
+
+Windows
+
+We recommend building and installing MXNet yourself using [Microsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/), or you can experiment with the latest [Microsoft Visual Studio 2017](https://www.visualstudio.com/downloads/).
+
+**Visual Studio 2015**
+
+To build and install MXNet yourself, you need the following dependencies; install them first:
+
+1. If [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older-downloads/) is not already 
installed, download and install it. You can download and install the free 
community edition.
+2. Download and install [CMake](https://cmake.org/) if it is not already installed.
+3. Download and install 
[OpenCV](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download).
+4. Unzip the OpenCV package.
+5. Set the environment variable ```OpenCV_DIR``` to point to the ```OpenCV 
build directory``` (```C:\opencv\build\x64\vc14``` for example). Also, you need 
to add the OpenCV bin directory (```C:\opencv\build\x64\vc14\bin``` for 
example) to the ``PATH`` variable.
+6. If you have the Intel Math Kernel Library (MKL) installed, set ```MKL_ROOT``` to point to the ```MKL``` directory that contains the ```include``` and ```lib``` directories. If you want to use MKL BLAS, set ```-DUSE_BLAS=mkl``` when running cmake. Typically, you can find the directory in
+```C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\mkl```.
+7. If you don't have the Intel Math Kernel Library (MKL) installed, download and install [OpenBLAS](http://sourceforge.net/projects/openblas/files/v0.2.14/). Note that you should also download ```mingw64.dll.zip``` along with OpenBLAS and add them to PATH.
+8. Set the environment variable ```OpenBLAS_HOME``` to point to the 
```OpenBLAS``` directory that contains the ```include``` and ```lib``` 
directories. Typically, you can find the directory in ```C:\Program files 
(x86)\OpenBLAS\```. 
+
+After you have installed all of the required dependencies, build the MXNet 
source code:
+
+1. Download the MXNet source code from 
[GitHub](https://github.com/apache/incubator-mxnet). Don't forget to pull the 
submodules:
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+```
+
+2. Copy file `3rdparty/mkldnn/config_template.vcxproj` to incubator-mxnet root.
+
+3. Start a Visual Studio command prompt.
+
+4. Use [CMake](https://cmake.org/) to create a Visual Studio solution in 
```./build``` or some other directory. Make sure to specify the architecture in 
the 
+[CMake](https://cmake.org/) command:
+```
+mkdir build
+cd build
+cmake -G 

[GitHub] xinyu-intel commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-06-19 Thread GitBox
xinyu-intel commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r196632571
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,287 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl 
gcc libopenblas-dev python python-pip python-dev python-opencv graphviz 
python-scipy python-sklearn
+```
+
+### Clone MXNet sources
+
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+cd incubator-mxnet
+git submodule update --recursive --init
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(nproc) USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl 
USE_INTEL_PATH=/opt/intel
+```
+
+If you don't have the full MKL library installed, you can use OpenBLAS by setting `USE_BLAS=openblas`.
+
+MacOS
+
+### Prerequisites
+
+Install the dependencies required for MXNet with the following commands:
+
+- [Homebrew](https://brew.sh/)
+- gcc (clang in macOS does not support OpenMP)
+- OpenCV (for computer vision operations)
+
+```
+# Paste this command in Mac terminal to install Homebrew
+/usr/bin/ruby -e "$(curl -fsSL 
https://raw.githubusercontent.com/Homebrew/install/master/install)"
+
+# install dependency
+brew update
+brew install pkg-config
+brew install graphviz
+brew tap homebrew/core
+brew install opencv
+brew tap homebrew/versions
+brew install gcc49
+brew link gcc49
+```
+
+### Enable OpenMP for MacOS
+
+If you want to enable OpenMP for better performance, you should modify these 
two files:
+
+1. Makefile L138:
+
+```
+ifeq ($(USE_OPENMP), 1)
+# ifneq ($(UNAME_S), Darwin)
+CFLAGS += -fopenmp
+# endif
+endif
+```
+
+2. prepare_mkldnn.sh L96:
+
+```
+CC=gcc-4.9 CXX=g++-4.9 cmake $MKLDNN_ROOTDIR 
-DCMAKE_INSTALL_PREFIX=$MKLDNN_INSTALLDIR -B$MKLDNN_BUILDDIR 
-DARCH_OPT_FLAGS="-mtune=generic" -DWITH_TEST=OFF -DWITH_EXAMPLE=OFF >&2
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(sysctl -n hw.ncpu) CC=gcc-4.9 CXX=g++-4.9 USE_OPENCV=0 USE_OPENMP=1 
USE_MKLDNN=1 USE_BLAS=apple USE_PROFILER=1
 
 Review comment:
   added
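
For anyone validating such a build, MKL-DNN also ships a verbose mode: setting `MKLDNN_VERBOSE=1` makes the library print one line per executed primitive, which directly confirms that MKL-DNN kernels are being hit (a small sketch, assuming the default build of the library):

```python
# Illustrative check that MKL-DNN kernels actually run.
import os
os.environ["MKLDNN_VERBOSE"] = "1"  # must be set before MKL-DNN is loaded

import mxnet as mx

x = mx.nd.random.uniform(shape=(1, 3, 32, 32))
w = mx.nd.random.uniform(shape=(4, 3, 3, 3))
y = mx.nd.Convolution(data=x, weight=w, no_bias=True,
                      kernel=(3, 3), num_filter=4)
y.wait_to_read()
# Expect log lines such as: mkldnn_verbose,exec,convolution,...
```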


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] xinyu-intel commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-06-19 Thread GitBox
xinyu-intel commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r196632249
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,287 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl 
gcc libopenblas-dev python python-pip python-dev python-opencv graphviz 
python-scipy python-sklearn
+```
+
+### Clone MXNet sources
+
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+cd incubator-mxnet
+git submodule update --recursive --init
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(nproc) USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl 
USE_INTEL_PATH=/opt/intel
+```
+
+If you don't have the full MKL library installed, you can use OpenBLAS by setting `USE_BLAS=openblas`.
+
+MacOS
+
+### Prerequisites
+
+Install the dependencies required for MXNet with the following commands:
+
+- [Homebrew](https://brew.sh/)
+- gcc (clang in macOS does not support OpenMP)
+- OpenCV (for computer vision operations)
+
+```
+# Paste this command in Mac terminal to install Homebrew
+/usr/bin/ruby -e "$(curl -fsSL 
https://raw.githubusercontent.com/Homebrew/install/master/install)"
+
+# install dependency
+brew update
+brew install pkg-config
+brew install graphviz
+brew tap homebrew/core
+brew install opencv
+brew tap homebrew/versions
+brew install gcc49
+brew link gcc49
+```
+
+### Enable OpenMP for MacOS
+
+If you want to enable OpenMP for better performance, you should modify these 
two files:
+
+1. Makefile L138:
+
+```
+ifeq ($(USE_OPENMP), 1)
+# ifneq ($(UNAME_S), Darwin)
+CFLAGS += -fopenmp
+# endif
+endif
+```
+
+2. prepare_mkldnn.sh L96:
+
+```
+CC=gcc-4.9 CXX=g++-4.9 cmake $MKLDNN_ROOTDIR 
-DCMAKE_INSTALL_PREFIX=$MKLDNN_INSTALLDIR -B$MKLDNN_BUILDDIR 
-DARCH_OPT_FLAGS="-mtune=generic" -DWITH_TEST=OFF -DWITH_EXAMPLE=OFF >&2
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(sysctl -n hw.ncpu) CC=gcc-4.9 CXX=g++-4.9 USE_OPENCV=0 USE_OPENMP=1 
USE_MKLDNN=1 USE_BLAS=apple USE_PROFILER=1
+```
+
+*Note: Temporarily disable OPENCV.*
+
+Windows
+
+We recommend building and installing MXNet yourself using [Microsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/), or you can experiment with the latest [Microsoft Visual Studio 2017](https://www.visualstudio.com/downloads/).
+
+**Visual Studio 2015**
+
+To build and install MXNet yourself, you need the following dependencies; install them first:
+
+1. If [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older-downloads/) is not already 
installed, download and install it. You can download and install the free 
community edition.
+2. Download and install [CMake](https://cmake.org/) if it is not already installed.
+3. Download and install 
[OpenCV](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download).
+4. Unzip the OpenCV package.
+5. Set the environment variable ```OpenCV_DIR``` to point to the ```OpenCV 
build directory``` (```C:\opencv\build\x64\vc14``` for example). Also, you need 
to add the OpenCV bin directory (```C:\opencv\build\x64\vc14\bin``` for 
example) to the ``PATH`` variable.
+6. If you have the Intel Math Kernel Library (MKL) installed, set ```MKL_ROOT``` to point to the ```MKL``` directory that contains the ```include``` and ```lib``` directories. If you want to use MKL BLAS, set ```-DUSE_BLAS=mkl``` when running cmake. Typically, you can find the directory in
+```C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\mkl```.
+7. If you don't have the Intel Math Kernel Library (MKL) installed, download and install [OpenBLAS](http://sourceforge.net/projects/openblas/files/v0.2.14/). Note that you should also download ```mingw64.dll.zip``` along with OpenBLAS and add them to PATH.
+8. Set the environment variable ```OpenBLAS_HOME``` to point to the 
```OpenBLAS``` directory that contains the ```include``` and ```lib``` 
directories. Typically, you can find the directory in ```C:\Program files 
(x86)\OpenBLAS\```. 
+
+After you have installed all of the required dependencies, build the MXNet 
source code:
+
+1. Download the MXNet source code from 
[GitHub](https://github.com/apache/incubator-mxnet). Don't forget to pull the 
submodules:
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+```
+
+2. Copy file `3rdparty/mkldnn/config_template.vcxproj` to incubator-mxnet root.
+
+3. Start a Visual Studio command prompt.
+
+4. Use [CMake](https://cmake.org/) to create a Visual Studio solution in 
```./build``` or some other directory. Make sure to specify the architecture in 
the 
+[CMake](https://cmake.org/) command:
+```
+mkdir build
+cd build
+cmake -G 

[GitHub] xinyu-intel commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-06-19 Thread GitBox
xinyu-intel commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r196631836
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,287 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl 
gcc libopenblas-dev python python-pip python-dev python-opencv graphviz 
python-scipy python-sklearn
+```
+
+### Clone MXNet sources
+
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+cd incubator-mxnet
+git submodule update --recursive --init
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(nproc) USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl 
USE_INTEL_PATH=/opt/intel
+```
+
+If you don't have the full MKL library installed, you can use OpenBLAS by setting `USE_BLAS=openblas`.
+
+MacOS
+
+### Prerequisites
+
+Install the dependencies required for MXNet with the following commands:
+
+- [Homebrew](https://brew.sh/)
+- gcc (clang in macOS does not support OpenMP)
+- OpenCV (for computer vision operations)
+
+```
+# Paste this command in Mac terminal to install Homebrew
+/usr/bin/ruby -e "$(curl -fsSL 
https://raw.githubusercontent.com/Homebrew/install/master/install)"
+
+# install dependency
+brew update
+brew install pkg-config
+brew install graphviz
+brew tap homebrew/core
+brew install opencv
+brew tap homebrew/versions
+brew install gcc49
+brew link gcc49
+```
+
+### Enable OpenMP for MacOS
+
+If you want to enable OpenMP for better performance, you should modify these 
two files:
+
+1. Makefile L138:
+
+```
+ifeq ($(USE_OPENMP), 1)
+# ifneq ($(UNAME_S), Darwin)
+CFLAGS += -fopenmp
+# endif
+endif
+```
+
+2. prepare_mkldnn.sh L96:
+
+```
+CC=gcc-4.9 CXX=g++-4.9 cmake $MKLDNN_ROOTDIR 
-DCMAKE_INSTALL_PREFIX=$MKLDNN_INSTALLDIR -B$MKLDNN_BUILDDIR 
-DARCH_OPT_FLAGS="-mtune=generic" -DWITH_TEST=OFF -DWITH_EXAMPLE=OFF >&2
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(sysctl -n hw.ncpu) CC=gcc-4.9 CXX=g++-4.9 USE_OPENCV=0 USE_OPENMP=1 
USE_MKLDNN=1 USE_BLAS=apple USE_PROFILER=1
+```
+
+*Note: Temporarily disable OPENCV.*
+
+Windows
+
+We recommend building and installing MXNet yourself using [Microsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/), or you can experiment with the latest [Microsoft Visual Studio 2017](https://www.visualstudio.com/downloads/).
+
+**Visual Studio 2015**
+
+To build and install MXNet yourself, you need the following dependencies; install them first:
+
+1. If [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older-downloads/) is not already 
installed, download and install it. You can download and install the free 
community edition.
+2. Download and install [CMake](https://cmake.org/) if it is not already installed.
+3. Download and install 
[OpenCV](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download).
+4. Unzip the OpenCV package.
+5. Set the environment variable ```OpenCV_DIR``` to point to the ```OpenCV 
build directory``` (```C:\opencv\build\x64\vc14``` for example). Also, you need 
to add the OpenCV bin directory (```C:\opencv\build\x64\vc14\bin``` for 
example) to the ``PATH`` variable.
+6. If you have the Intel Math Kernel Library (MKL) installed, set ```MKL_ROOT``` to point to the ```MKL``` directory that contains the ```include``` and ```lib``` directories. If you want to use MKL BLAS, set ```-DUSE_BLAS=mkl``` when running cmake. Typically, you can find the directory in
+```C:\Program Files (x86)\IntelSWTools\compilers_and_libraries_2018\windows\mkl```.
+7. If you don't have the Intel Math Kernel Library (MKL) installed, download and install [OpenBLAS](http://sourceforge.net/projects/openblas/files/v0.2.14/). Note that you should also download ```mingw64.dll.zip``` along with OpenBLAS and add them to PATH.
+8. Set the environment variable ```OpenBLAS_HOME``` to point to the 
```OpenBLAS``` directory that contains the ```include``` and ```lib``` 
directories. Typically, you can find the directory in ```C:\Program files 
(x86)\OpenBLAS\```. 
+
+After you have installed all of the required dependencies, build the MXNet 
source code:
+
+1. Download the MXNet source code from 
[GitHub](https://github.com/apache/incubator-mxnet). Don't forget to pull the 
submodules:
 
 Review comment:
   keep consistent with 
https://mxnet.incubator.apache.org/install/windows_setup.html


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

[GitHub] zheng-da commented on a change in pull request #11251: [WIP] Graph partitioner and subgraph op

2018-06-19 Thread GitBox
zheng-da commented on a change in pull request #11251: [WIP] Graph partitioner 
and subgraph op
URL: https://github.com/apache/incubator-mxnet/pull/11251#discussion_r196630557
 
 

 ##
 File path: src/operator/subgraph/partition_graph.cc
 ##
 @@ -0,0 +1,688 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2018 by Contributors
+ * \file partition_graph.cc
+ * \brief
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./default_subgraph_op.h"
+#include "./common.h"
+
+namespace nnvm {
+NodePtr CreateVariableNode(const std::string& name);
+}
+
+namespace mxnet {
+
+namespace op {
+
+using nnvm::Symbol;
+using nnvm::Node;
+using nnvm::NodePtr;
+using nnvm::NodeEntry;
+using nnvm::Graph;
+
+// TODO(junwu): Change this to 0
+#define SUBGRAPH_DEBUG 1
+
+namespace sg {  // sg stands for subgraph
+
+#if SUBGRAPH_DEBUG
+void PrintSubgraph(const std::vector<SimpleNodePtr>& simple_nodes) {
+  std::string op_names = "";
+  for (size_t i = 0; i < simple_nodes.size(); ++i) {
+op_names += simple_nodes[i]->node->attrs.name + ' ';
+  }
+  LOG(INFO) << "Subgraph node names: " << op_names;
+}
+
+void PrintNodeEntry(const nnvm::NodeEntry& entry) {
+  std::string ret = "NodeEntry: node_name=" + entry.node->attrs.name
++ ", index=" + std::to_string(entry.index) + ", version=" + 
std::to_string(entry.version);
+  LOG(INFO) << ret;
+}
+
+void PrintNodeEntries(const std::vector<nnvm::NodeEntry*>& entries) {
+  for (size_t i = 0; i < entries.size(); ++i) {
+PrintNodeEntry(*entries[i]);
+  }
+}
+#endif
+
+/*!
+ * \brief Given a MXNet computational graph, create an undirected graph from 
it.
+ * \param g the MXNet computational graph
+ * \param simple_nodes the nodes of undirected graph in top sorted order
+ */
+void CreateSimpleGraph(const Graph& g,
+   std::vector<SimpleNodePtr>* simple_nodes) {
+  const auto& indexed_graph = g.indexed_graph();
+  simple_nodes->reserve(indexed_graph.num_nodes());
+  DFSVisit(g.outputs, [&](const NodePtr& node) {
+SimpleNodePtr sn = SimpleNode::Create();
+sn->node = node.get();
+for (size_t i = 0; i < sn->node->inputs.size(); ++i) {
+  const auto& e = sn->node->inputs[i];
+  const auto input_nid = indexed_graph.node_id(e.node.get());
+  CHECK_LT(input_nid, simple_nodes->size());
+  auto& input_node_outputs = (*simple_nodes)[input_nid]->outputs;
+  auto it = input_node_outputs.find(sn->node);
+  if (it == input_node_outputs.end()) {
+input_node_outputs.emplace(sn->node, std::vector<size_t>{i});
+  } else {
+it->second.push_back(i);
+  }
+}
+simple_nodes->emplace_back(std::move(sn));
+  });
+}
+
+/*!
+ * \brief Reset labels of the subgraph nodes to the original state
+ * and clear the vector of subgraph nodes.
+ */
+void ResetNodeLabels(const nnvm::Graph& g,
+ const std::vector<SimpleNodePtr>& simple_nodes,
+ std::vector<nnvm::Node*>* subgraph_nodes) {
+  for (auto n : *subgraph_nodes) {
+const auto nid = g.indexed_graph().node_id(n);
+simple_nodes[nid]->label = -1;
+  }
+  subgraph_nodes->clear();
+}
+
+/*!
+ * \brief This function traverses the nodes in a computation graph from a 
starting
+ * node following the input edges and output edges, and marks all nodes that
+ * can be accessed from the starting node. Before the function returns,
+ * it will conduct checking whether there is a loop between the potential 
subgraph
+ * and the outside nodes. If so, add the node that should break the loop
+ * in excluded_nodes and return false. Otherwise, return true.
+ * \param g the whole graph
+ * \subgraph_selector determines whether the visited node should be chosen or not
+ * \label the label of the current subgraph
+ * \snid node id of the seed simple node
+ * \simple_nodes all simple nodes in the top sorted order
+ * \subgraph_nodes all the nodes belonging to the same subgraph of seed node
+ * \excluded_nodes set of nodes that should be excluded from the current 
subgraph
+ */
+bool LabelSubgraph(const Graph& g,
+   SubgraphSelectorPtr subgraph_selector,
+   const int label,
+   const size_t snid,  // simple node id, this is a seed
+   
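
The doc comment above describes growing a subgraph outward from a seed node, labeling every reachable node that the selector accepts. A minimal sketch of that labeling idea, written in Python for brevity (illustrative only; the real implementation in partition_graph.cc also handles loop detection and excluded nodes):

```python
from collections import deque

def label_subgraph(neighbors, accept, seed, label, labels):
    """BFS from `seed`, assigning `label` to every accepted, unlabeled node.

    neighbors: dict node -> adjacent nodes (input and output edges)
    accept:    predicate deciding whether a node may join the subgraph
    labels:    dict node -> current label (-1 means unlabeled)
    """
    queue = deque([seed])
    labels[seed] = label
    members = [seed]
    while queue:
        node = queue.popleft()
        for nxt in neighbors[node]:
            if labels.get(nxt, -1) == -1 and accept(nxt):
                labels[nxt] = label
                members.append(nxt)
                queue.append(nxt)
    return members

# Nodes b and c join a's subgraph; d is rejected by the selector.
adj = {"a": ["b"], "b": ["a", "c", "d"], "c": ["b"], "d": ["b"]}
print(label_subgraph(adj, lambda n: n != "d", "a", 0, {}))  # ['a', 'b', 'c']
```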

[GitHub] zheng-da commented on a change in pull request #11251: [WIP] Graph partitioner and subgraph op

2018-06-19 Thread GitBox
zheng-da commented on a change in pull request #11251: [WIP] Graph partitioner 
and subgraph op
URL: https://github.com/apache/incubator-mxnet/pull/11251#discussion_r196630254
 
 

 ##
 File path: src/operator/subgraph/partition_graph.cc
 ##
 @@ -0,0 +1,688 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+
+/*!
+ *  Copyright (c) 2018 by Contributors
+ * \file partition_graph.cc
+ * \brief
+ */
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+
+#include "./default_subgraph_op.h"
+#include "./common.h"
+
+namespace nnvm {
+NodePtr CreateVariableNode(const std::string& name);
+}
+
+namespace mxnet {
+
+namespace op {
+
+using nnvm::Symbol;
+using nnvm::Node;
+using nnvm::NodePtr;
+using nnvm::NodeEntry;
+using nnvm::Graph;
+
+// TODO(junwu): Change this to 0
+#define SUBGRAPH_DEBUG 1
+
+namespace sg {  // sg stands for subgraph
+
+#if SUBGRAPH_DEBUG
+void PrintSubgraph(const std::vector<SimpleNodePtr>& simple_nodes) {
+  std::string op_names = "";
+  for (size_t i = 0; i < simple_nodes.size(); ++i) {
+op_names += simple_nodes[i]->node->attrs.name + ' ';
+  }
+  LOG(INFO) << "Subgraph node names: " << op_names;
+}
+
+void PrintNodeEntry(const nnvm::NodeEntry& entry) {
+  std::string ret = "NodeEntry: node_name=" + entry.node->attrs.name
++ ", index=" + std::to_string(entry.index) + ", version=" + 
std::to_string(entry.version);
+  LOG(INFO) << ret;
+}
+
+void PrintNodeEntries(const std::vector<nnvm::NodeEntry*>& entries) {
+  for (size_t i = 0; i < entries.size(); ++i) {
+PrintNodeEntry(*entries[i]);
+  }
+}
+#endif
+
+/*!
+ * \brief Given a MXNet computational graph, create an undirected graph from 
it.
+ * \param g the MXNet computational graph
+ * \param simple_nodes the nodes of undirected graph in top sorted order
+ */
+void CreateSimpleGraph(const Graph& g,
+   std::vector<SimpleNodePtr>* simple_nodes) {
+  const auto& indexed_graph = g.indexed_graph();
+  simple_nodes->reserve(indexed_graph.num_nodes());
+  DFSVisit(g.outputs, [&](const NodePtr& node) {
+SimpleNodePtr sn = SimpleNode::Create();
+sn->node = node.get();
+for (size_t i = 0; i < sn->node->inputs.size(); ++i) {
+  const auto& e = sn->node->inputs[i];
+  const auto input_nid = indexed_graph.node_id(e.node.get());
+  CHECK_LT(input_nid, simple_nodes->size());
+  auto& input_node_outputs = (*simple_nodes)[input_nid]->outputs;
+  auto it = input_node_outputs.find(sn->node);
+  if (it == input_node_outputs.end()) {
+input_node_outputs.emplace(sn->node, std::vector<size_t>{i});
+  } else {
+it->second.push_back(i);
+  }
+}
+simple_nodes->emplace_back(std::move(sn));
+  });
+}
+
+/*!
+ * \brief Reset labels of the subgraph nodes to the original state
+ * and clear the vector of subgraph nodes.
+ */
+void ResetNodeLabels(const nnvm::Graph& g,
+ const std::vector<SimpleNodePtr>& simple_nodes,
+ std::vector<nnvm::Node*>* subgraph_nodes) {
+  for (auto n : *subgraph_nodes) {
+const auto nid = g.indexed_graph().node_id(n);
+simple_nodes[nid]->label = -1;
+  }
+  subgraph_nodes->clear();
+}
+
+/*!
+ * \brief This function traverses the nodes in a computation graph from a 
starting
+ * node following the input edges and output edges, and marks all nodes that
+ * can be accessed from the starting node. Before the function returns,
+ * it will conduct checking whether there is a loop between the potential 
subgraph
+ * and the outside nodes. If so, add the node that should break the loop
+ * in excluded_nodes and return false. Otherwise, return true.
+ * \param g the whole graph
+ * \subgraph_selector determines whether the visited node should be chosen or not
+ * \label the label of the current subgraph
+ * \snid node id of the seed simple node
+ * \simple_nodes all simple nodes in the top sorted order
+ * \subgraph_nodes all the nodes belonging to the same subgraph of seed node
+ * \excluded_nodes set of nodes that should be excluded from the current 
subgraph
+ */
+bool LabelSubgraph(const Graph& g,
+   SubgraphSelectorPtr subgraph_selector,
+   const int label,
+   const size_t snid,  // simple node id, this is a seed
+   

[GitHub] xinyu-intel commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-06-19 Thread GitBox
xinyu-intel commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r196630931
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,287 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl 
gcc libopenblas-dev python python-pip python-dev python-opencv graphviz 
python-scipy python-sklearn
+```
+
+### Clone MXNet sources
+
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+cd incubator-mxnet
+git submodule update --recursive --init
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(nproc) USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl 
USE_INTEL_PATH=/opt/intel
+```
+
+If you don't have the full MKL library installed, you can use OpenBLAS by setting `USE_BLAS=openblas`.
+
+MacOS
+
+### Prerequisites
+
+Install the dependencies required for MXNet with the following commands:
+
+- [Homebrew](https://brew.sh/)
+- gcc (clang in macOS does not support OpenMP)
+- OpenCV (for computer vision operations)
+
+```
+# Paste this command in Mac terminal to install Homebrew
+/usr/bin/ruby -e "$(curl -fsSL 
https://raw.githubusercontent.com/Homebrew/install/master/install)"
+
+# install dependency
+brew update
+brew install pkg-config
+brew install graphviz
+brew tap homebrew/core
+brew install opencv
+brew tap homebrew/versions
+brew install gcc49
+brew link gcc49
+```
+
+### Enable OpenMP for MacOS
+
+If you want to enable OpenMP for better performance, you should modify these 
two files:
+
+1. Makefile L138:
+
+```
+ifeq ($(USE_OPENMP), 1)
+# ifneq ($(UNAME_S), Darwin)
+CFLAGS += -fopenmp
+# endif
+endif
+```
+
+2. prepare_mkldnn.sh L96:
+
+```
+CC=gcc-4.9 CXX=g++-4.9 cmake $MKLDNN_ROOTDIR 
-DCMAKE_INSTALL_PREFIX=$MKLDNN_INSTALLDIR -B$MKLDNN_BUILDDIR 
-DARCH_OPT_FLAGS="-mtune=generic" -DWITH_TEST=OFF -DWITH_EXAMPLE=OFF >&2
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(sysctl -n hw.ncpu) CC=gcc-4.9 CXX=g++-4.9 USE_OPENCV=0 USE_OPENMP=1 
USE_MKLDNN=1 USE_BLAS=apple USE_PROFILER=1
+```
+
+*Note: Temporarily disable OPENCV.*
+
+Windows
+
+We recommend building and installing MXNet yourself using [Microsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/), or you can experiment with the latest [Microsoft Visual Studio 2017](https://www.visualstudio.com/downloads/).
+
+**Visual Studio 2015**
+
+To build and install MXNet yourself, you need the following dependencies; install them first:
+
+1. If [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older-downloads/) is not already 
installed, download and install it. You can download and install the free 
community edition.
+2. Download and install [CMake](https://cmake.org/) if it is not already installed.
+3. Download and install 
[OpenCV](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download).
 
 Review comment:
   keep consistent with 
https://mxnet.incubator.apache.org/install/windows_setup.html
  Feel free to check if v2 works for yourself :)


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] xinyu-intel commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-06-19 Thread GitBox
xinyu-intel commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r196630871
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,287 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl 
gcc libopenblas-dev python python-pip python-dev python-opencv graphviz 
python-scipy python-sklearn
+```
+
+### Clone MXNet sources
+
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+cd incubator-mxnet
+git submodule update --recursive --init
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(nproc) USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl 
USE_INTEL_PATH=/opt/intel
+```
+
+If you don't have the full MKL library installed, you can use OpenBLAS by setting `USE_BLAS=openblas`.
+
+MacOS
+
+### Prerequisites
+
+Install the dependencies required for MXNet with the following commands:
+
+- [Homebrew](https://brew.sh/)
+- gcc (clang in macOS does not support OpenMP)
+- OpenCV (for computer vision operations)
+
+```
+# Paste this command in Mac terminal to install Homebrew
+/usr/bin/ruby -e "$(curl -fsSL 
https://raw.githubusercontent.com/Homebrew/install/master/install)"
+
+# install dependency
+brew update
+brew install pkg-config
+brew install graphviz
+brew tap homebrew/core
+brew install opencv
+brew tap homebrew/versions
+brew install gcc49
+brew link gcc49
+```
+
+### Enable OpenMP for MacOS
+
+If you want to enable OpenMP for better performance, you should modify these 
two files:
+
+1. Makefile L138:
+
+```
+ifeq ($(USE_OPENMP), 1)
+# ifneq ($(UNAME_S), Darwin)
+CFLAGS += -fopenmp
+# endif
+endif
+```
+
+2. prepare_mkldnn.sh L96:
+
+```
+CC=gcc-4.9 CXX=g++-4.9 cmake $MKLDNN_ROOTDIR 
-DCMAKE_INSTALL_PREFIX=$MKLDNN_INSTALLDIR -B$MKLDNN_BUILDDIR 
-DARCH_OPT_FLAGS="-mtune=generic" -DWITH_TEST=OFF -DWITH_EXAMPLE=OFF >&2
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(sysctl -n hw.ncpu) CC=gcc-4.9 CXX=g++-4.9 USE_OPENCV=0 USE_OPENMP=1 
USE_MKLDNN=1 USE_BLAS=apple USE_PROFILER=1
+```
+
+*Note: Temporarily disable OPENCV.*
+
+Windows
+
+We recommend building and installing MXNet yourself using [Microsoft Visual Studio 2015](https://www.visualstudio.com/vs/older-downloads/), or you can experiment with the latest [Microsoft Visual Studio 2017](https://www.visualstudio.com/downloads/).
+
+**Visual Studio 2015**
+
+To build and install MXNet yourself, you need the following dependencies; install them first:
+
+1. If [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older-downloads/) is not already 
installed, download and install it. You can download and install the free 
community edition.
+2. Download and install [CMake](https://cmake.org/) if it is not already installed.
 
 Review comment:
   keep consistent with 
https://mxnet.incubator.apache.org/install/windows_setup.html


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] xinyu-intel commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-06-19 Thread GitBox
xinyu-intel commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r196630743
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,287 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl 
gcc libopenblas-dev python python-pip python-dev python-opencv graphviz 
python-scipy python-sklearn
 
 Review comment:
   keep consistent with 
https://mxnet.incubator.apache.org/install/windows_setup.html


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] xinyu-intel commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-06-19 Thread GitBox
xinyu-intel commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r196630760
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,287 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl 
gcc libopenblas-dev python python-pip python-dev python-opencv graphviz 
python-scipy python-sklearn
+```
+
+### Clone MXNet sources
+
+```
+git clone --recursive https://github.com/apache/incubator-mxnet.git
+cd incubator-mxnet
+git submodule update --recursive --init
+```
+
+### Build MXNet with MKL-DNN
+
+```
+make -j $(nproc) USE_OPENCV=1 USE_MKLDNN=1 USE_BLAS=mkl 
USE_INTEL_PATH=/opt/intel
+```
+
+If you don't have the full MKL library installed, you can use OpenBLAS by setting `USE_BLAS=openblas`.
 
 Review comment:
   keep consistent with 
https://mxnet.incubator.apache.org/install/windows_setup.html


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] xinyu-intel commented on a change in pull request #11049: Add linux and macos MKLDNN Building Instruction

2018-06-19 Thread GitBox
xinyu-intel commented on a change in pull request #11049: Add linux and macos 
MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/11049#discussion_r196630743
 
 

 ##
 File path: MKLDNN_README.md
 ##
 @@ -0,0 +1,287 @@
+# Build/Install MXNet with MKL-DNN
+
+Contents
+
+* [1. Linux](#1)
+* [2. MacOS](#2)
+* [3. Windows](#3)
+* [4. Verify MXNet with python](#4)
+* [5. Enable MKL BLAS](#5)
+
+Linux
+
+### Prerequisites
+
+```
+apt-get update && apt-get install -y build-essential git libopencv-dev curl 
gcc libopenblas-dev python python-pip python-dev python-opencv graphviz 
python-scipy python-sklearn
 
 Review comment:
   keep consistent with https://mxnet.incubator.apache.org/install/index.html


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #11213: [MXNET-533] MXNet-ONNX export

2018-06-19 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #11213: [MXNET-533] 
MXNet-ONNX export
URL: https://github.com/apache/incubator-mxnet/pull/11213#discussion_r196630413
 
 

 ##
 File path: python/mxnet/contrib/onnx/_export/op_translations.py
 ##
 @@ -0,0 +1,1667 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+# Based on
+#  https://github.com/NVIDIA/mxnet_to_onnx/blob/master/mx2onnx_converter/
+# mx2onnx_converter_functions.py
+#  Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
+#
+#  Redistribution and use in source and binary forms, with or without
+#  modification, are permitted provided that the following conditions
+#  are met:
+#  * Redistributions of source code must retain the above copyright
+#notice, this list of conditions and the following disclaimer.
+#  * Redistributions in binary form must reproduce the above copyright
+#notice, this list of conditions and the following disclaimer in the
+#documentation and/or other materials provided with the distribution.
+#  * Neither the name of NVIDIA CORPORATION nor the names of its
+#contributors may be used to endorse or promote products derived
+#from this software without specific prior written permission.
+#
+#  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
+#  EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+#  IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+#  PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+#  CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+#  EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+#  PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+#  PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+#  OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+# coding: utf-8
+# pylint: disable=too-many-locals,no-else-return,too-many-lines
+# pylint: disable=anomalous-backslash-in-string,eval-used
+"""
+Conversion Functions for common layers.
+Add new functions here with a decorator.
+"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+import re
+import numpy as np
+
+from onnx import helper, numpy_helper, mapping
+from .export_onnx import MXNetGraph as mx_op
+
+
+@mx_op.register("null")
+def convert_weights_and_inputs(node, **kwargs):
+    """Helper function to convert weights and inputs.
+    """
+    name = node["name"]
+
+    if kwargs["is_input"] is False:
+        weights = kwargs["weights"]
+        initializer = kwargs["initializer"]
+        np_arr = weights[name]
+        data_type = mapping.NP_TYPE_TO_TENSOR_TYPE[np_arr.dtype]
+        dims = np.shape(np_arr)
+
+        tensor_node = helper.make_tensor_value_info(name, data_type, dims)
+
+        initializer.append(
+            helper.make_tensor(
+                name=name,
+                data_type=data_type,
+                dims=dims,
+                vals=np_arr.flatten().tolist(),
+                raw=False,
+            )
+        )
+
+        return [tensor_node]
+    else:
+        tval_node = helper.make_tensor_value_info(name, kwargs["in_type"], kwargs["in_shape"])
 
 Review comment:
  What about moving all these free-text strings to a constants file? Because 
param names and keys are all changing rapidly with ONNX, it will be hard to 
maintain and easy for bugs to slip in.
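
A minimal sketch of the suggested refactor (every name below is hypothetical, not taken from the PR):

```python
# Hypothetical constants module (illustrative of the suggestion above;
# none of these names exist in the PR itself).
# onnx_constants.py
IS_INPUT = "is_input"
WEIGHTS = "weights"
INITIALIZER = "initializer"
IN_TYPE = "in_type"
IN_SHAPE = "in_shape"

# A converter would then read:
#     if kwargs[IS_INPUT] is False:
#         weights = kwargs[WEIGHTS]
# so a renamed key fails loudly in one place instead of silently in many.
```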


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sandeep-krishnamurthy commented on a change in pull request #11213: [MXNET-533] MXNet-ONNX export

2018-06-19 Thread GitBox
sandeep-krishnamurthy commented on a change in pull request #11213: [MXNET-533] 
MXNet-ONNX export
URL: https://github.com/apache/incubator-mxnet/pull/11213#discussion_r196629699
 
 

 ##
 File path: python/mxnet/contrib/onnx/_export/export_onnx.py
 ##
 @@ -0,0 +1,267 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+#
+# Based on
+# 
https://github.com/NVIDIA/mxnet_to_onnx/blob/master/mx2onnx_converter/mx2onnx_converter.py#
+#  Copyright (c) 2017, NVIDIA CORPORATION. All rights reserved.
+#
+#  Redistribution and use in source and binary forms, with or without
+#  modification, are permitted provided that the following conditions
+#  are met:
+#  * Redistributions of source code must retain the above copyright
+#notice, this list of conditions and the following disclaimer.
+#  * Redistributions in binary form must reproduce the above copyright
+#notice, this list of conditions and the following disclaimer in the
+#documentation and/or other materials provided with the distribution.
+#  * Neither the name of NVIDIA CORPORATION nor the names of its
+#contributors may be used to endorse or promote products derived
+#from this software without specific prior written permission.
+#
+#  THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS ``AS IS'' AND ANY
+#  EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
+#  IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+#  PURPOSE ARE DISCLAIMED.  IN NO EVENT SHALL THE COPYRIGHT OWNER OR
+#  CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
+#  EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
+#  PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
+#  PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY
+#  OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+#  (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
+#  OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
+
+# coding: utf-8
+# pylint: disable=invalid-name,too-many-locals,no-self-use,too-many-arguments,
+# pylint: disable=maybe-no-member,too-many-nested-blocks
+"""MXNet to ONNX graph converter functions"""
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+from __future__ import unicode_literals
+
+import json
+import numpy as np
+
+from onnx import (checker, helper, onnx_pb2)
+from onnx.helper import make_tensor_value_info
+
+from .... import context
+from .... import ndarray as nd
+from .... import io
+from .... import module as mod
+
+
+class MXNetGraph(object):
+    """Class to convert MXNet to ONNX graph"""
+    registry_ = {}
+    input_output_maps_ = {}
+
+    def __init__(self):
+        # topologically sorted nodes
+        self.nodes = []
+        self.input_tensors = []
+        self.output_tensors = []
+
+    @staticmethod
+    def register(op_name):
+        """Register operator"""
+        def wrapper(func):
+            """Helper function to map functions"""
+            MXNetGraph.registry_[op_name] = func
+            return func
+
+        return wrapper
+
+    @staticmethod
+    def convert_layer(node, **kwargs):
+        """Convert MXNet layer to ONNX"""
+        op = str(node["op"])
+        if op not in MXNetGraph.registry_:
+            raise AttributeError("No conversion function registered for op type %s yet." % op)
+        convert_fun = MXNetGraph.registry_[op]
 
 Review comment:
   @Roshrini 
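   For context, a self-contained sketch of the registry pattern this hunk 
implements (names below are illustrative stand-ins, not the converter's 
actual API):
   
   ```python
   class Registry(object):
       """Minimal op-name -> converter-function registry, as in the hunk above."""
       registry_ = {}
   
       @staticmethod
       def register(op_name):
           def wrapper(func):
               Registry.registry_[op_name] = func
               return func
           return wrapper
   
       @staticmethod
       def convert_layer(node, **kwargs):
           op = str(node["op"])
           if op not in Registry.registry_:
               raise AttributeError("No conversion function registered for op type %s yet." % op)
           return Registry.registry_[op](node, **kwargs)
   
   @Registry.register("FullyConnected")
   def convert_fully_connected(node, **kwargs):
       # a real converter would build and return ONNX node(s) here
       return "converted %s" % node["op"]
   
   print(Registry.convert_layer({"op": "FullyConnected"}))
   ```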


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2018-06-19 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 20be755  Bump the publish timestamp.
20be755 is described below

commit 20be75552f17ecc115c896ba6dde797edd1f0ca2
Author: mxnet-ci 
AuthorDate: Wed Jun 20 01:36:13 2018 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..dfbfce5
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Wed Jun 20 01:36:13 UTC 2018



[GitHub] coldsheephot commented on issue #11299: row_sparse_pull, push row_sparse gradient is too slow, it has 10+ times difference

2018-06-19 Thread GitBox
coldsheephot commented on issue #11299: row_sparse_pull, push row_sparse 
gradient is too slow, it has 10+ times difference
URL: 
https://github.com/apache/incubator-mxnet/issues/11299#issuecomment-398595855
 
 
   How long does it take to solve those problems??? I am very anxious.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] TaoLv commented on issue #11333: [WIP] fix issue of test_gru_bidirectional #11219 and add robust code

2018-06-19 Thread GitBox
TaoLv commented on issue #11333: [WIP] fix issue of test_gru_bidirectional 
#11219 and add robust code
URL: https://github.com/apache/incubator-mxnet/pull/11333#issuecomment-398594512
 
 
   @szha could you help to review this fix?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on issue #11266: [MXNET-514] Add clip_global_norm(row_sparse_grad). Fix row_sparse_param.save(). Fix trainer init_kvstore

2018-06-19 Thread GitBox
eric-haibin-lin commented on issue #11266: [MXNET-514] Add 
clip_global_norm(row_sparse_grad). Fix row_sparse_param.save(). Fix trainer 
init_kvstore
URL: https://github.com/apache/incubator-mxnet/pull/11266#issuecomment-398592893
 
 
   It's a flaky one. I don't see it fail on the master build. Maybe it has to 
do with param.load/save on Windows... 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] coldsheephot commented on issue #11299: row_sparse_pull, push row_sparse gradient is too slow, it has 10+ times difference

2018-06-19 Thread GitBox
coldsheephot commented on issue #11299: row_sparse_pull, push row_sparse 
gradient is too slow, it has 10+ times difference
URL: 
https://github.com/apache/incubator-mxnet/issues/11299#issuecomment-398591011
 
 
   @eric-haibin-lin yes. I want to use the feature for the multi-device and 
multi-machine case. Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] rahul003 commented on a change in pull request #11234: [MXNET-535] Add Warmup Learning Rate Scheduler and fix bugs in LR Schedulers

2018-06-19 Thread GitBox
rahul003 commented on a change in pull request #11234: [MXNET-535] Add Warmup 
Learning Rate Scheduler and fix bugs in LR Schedulers
URL: https://github.com/apache/incubator-mxnet/pull/11234#discussion_r196620650
 
 

 ##
 File path: python/mxnet/lr_scheduler.py
 ##
 @@ -153,18 +153,57 @@ class PolyScheduler(LRScheduler):
 
 """
 
-def __init__(self, max_update, base_lr=0.01, pwr=2):
 
 Review comment:
   Now base_lr is going into super through kwargs. Would that still break the 
API?
   The issue was that base_lr wasn't being set by this optimizer, which was 
the source of bugs when using the symbolic examples. 
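   For readers following along, a minimal sketch of the base_lr-through-kwargs 
pattern under discussion (simplified stand-ins, not the full MXNet classes):
   
   ```python
   class LRScheduler(object):
       def __init__(self, base_lr=0.01):
           self.base_lr = base_lr
   
   class PolyScheduler(LRScheduler):
       def __init__(self, max_update, pwr=2, **kwargs):
           # base_lr, if given, reaches the parent through **kwargs
           super(PolyScheduler, self).__init__(**kwargs)
           self.base_lr_orig = self.base_lr
           self.max_update = max_update
           self.power = pwr
   
   sched = PolyScheduler(max_update=1000, base_lr=0.1)
   print(sched.base_lr)  # 0.1 -- forwarded via kwargs as described above
   ```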


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on issue #11266: [MXNET-514] Add clip_global_norm(row_sparse_grad). Fix row_sparse_param.save(). Fix trainer init_kvstore

2018-06-19 Thread GitBox
eric-haibin-lin commented on issue #11266: [MXNET-514] Add 
clip_global_norm(row_sparse_grad). Fix row_sparse_param.save(). Fix trainer 
init_kvstore
URL: https://github.com/apache/incubator-mxnet/pull/11266#issuecomment-398589396
 
 
   @ThomasDelteil let me look into it
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] rahul003 commented on a change in pull request #11234: [MXNET-535] Add Warmup Learning Rate Scheduler and fix bugs in LR Schedulers

2018-06-19 Thread GitBox
rahul003 commented on a change in pull request #11234: [MXNET-535] Add Warmup 
Learning Rate Scheduler and fix bugs in LR Schedulers
URL: https://github.com/apache/incubator-mxnet/pull/11234#discussion_r196620100
 
 

 ##
 File path: python/mxnet/lr_scheduler.py
 ##
 @@ -153,18 +153,57 @@ class PolyScheduler(LRScheduler):
 
 """
 
-def __init__(self, max_update, base_lr=0.01, pwr=2):
-super(PolyScheduler, self).__init__(base_lr)
+def __init__(self, max_update, pwr=2, **kwargs):
+super(PolyScheduler, self).__init__(**kwargs)
 assert isinstance(max_update, int)
 if max_update < 1:
 raise ValueError("maximum number of updates must be strictly 
positive")
 self.base_lr_orig = self.base_lr
 self.max_update = max_update
 self.power = pwr
-self.base_lr = self.base_lr_orig
 
 def __call__(self, num_update):
 if num_update <= self.max_update:
 self.base_lr = self.base_lr_orig * pow(1.0 - float(num_update) / 
float(self.max_update),
self.power)
 return self.base_lr
+
+class WarmupScheduler(LRScheduler):
+"""Implement linear warmup starting from lr_begin to given scheduler's 
base_lr.
+
+Parameters
+--
+lr_begin: float
+  learning rate to start increasing from
+warmup_steps: int
+  number of warmup steps
+scheduler: LRScheduler
+  scheduler following the warmup
+"""
+def __init__(self, lr_begin, warmup_steps, scheduler):
+super(WarmupScheduler, self).__init__()
+self.lr_begin = lr_begin
+self.scheduler = scheduler
+self.lr_final = self.scheduler.base_lr
+if self.lr_begin > self.lr_final:
+raise ValueError("Final lr has to be higher than beginning lr")
+if warmup_steps <= 0:
+raise ValueError("Warmup steps has to be positive")
+self.warmup_steps = warmup_steps
+self.lrs_updates = {}
 
 Review comment:
   For each batch, the number of calls to __call__ equals the number of 
learnable parameter arrays, hence the cache keyed by num_update. 
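   To make the caching rationale concrete, a toy sketch (illustrative only, 
not the MXNet implementation):
   
   ```python
   # __call__ runs once per parameter array at the same num_update, so
   # memoizing by num_update avoids recomputing the same learning rate.
   lrs_updates = {}
   
   def warmup_lr(num_update, lr_begin=0.0, lr_final=0.1, warmup_steps=100):
       if num_update not in lrs_updates:
           if num_update < warmup_steps:
               lrs_updates[num_update] = lr_begin + \
                   (lr_final - lr_begin) * num_update / float(warmup_steps)
           else:
               lrs_updates[num_update] = lr_final
       return lrs_updates[num_update]
   
   for _ in range(3):        # three parameter arrays, one update step
       print(warmup_lr(50))  # 0.05; computed once, then served from the cache
   ```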


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy commented on issue #10945: Flaky test: test_operator_gpu.test_sgd

2018-06-19 Thread GitBox
larroy commented on issue #10945: Flaky test: test_operator_gpu.test_sgd
URL: 
https://github.com/apache/incubator-mxnet/issues/10945#issuecomment-398588103
 
 
   Hi @szha @haojin2, I think this is a misunderstanding. Apologies. I don't 
have any evidence that the failure is still occurring. I was actually not aware 
a fix had been committed, which would explain why I was not able to reproduce 
it. If you have not seen this happening again, then we can close.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 closed issue #10945: Flaky test: test_operator_gpu.test_sgd

2018-06-19 Thread GitBox
haojin2 closed issue #10945: Flaky test: test_operator_gpu.test_sgd
URL: https://github.com/apache/incubator-mxnet/issues/10945
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] hetong007 commented on a change in pull request #11234: [MXNET-535] Add Warmup Learning Rate Scheduler and fix bugs in LR Schedulers

2018-06-19 Thread GitBox
hetong007 commented on a change in pull request #11234: [MXNET-535] Add Warmup 
Learning Rate Scheduler and fix bugs in LR Schedulers
URL: https://github.com/apache/incubator-mxnet/pull/11234#discussion_r196619173
 
 

 ##
 File path: python/mxnet/lr_scheduler.py
 ##
 @@ -153,18 +153,57 @@ class PolyScheduler(LRScheduler):
 
 """
 
-def __init__(self, max_update, base_lr=0.01, pwr=2):
-super(PolyScheduler, self).__init__(base_lr)
+def __init__(self, max_update, pwr=2, **kwargs):
+super(PolyScheduler, self).__init__(**kwargs)
 assert isinstance(max_update, int)
 if max_update < 1:
 raise ValueError("maximum number of updates must be strictly 
positive")
 self.base_lr_orig = self.base_lr
 self.max_update = max_update
 self.power = pwr
-self.base_lr = self.base_lr_orig
 
 def __call__(self, num_update):
 if num_update <= self.max_update:
 self.base_lr = self.base_lr_orig * pow(1.0 - float(num_update) / 
float(self.max_update),
self.power)
 return self.base_lr
+
+class WarmupScheduler(LRScheduler):
+"""Implement linear warmup starting from lr_begin to given scheduler's 
base_lr.
+
+Parameters
+--
+lr_begin: float
+  learning rate to start increasing from
+warmup_steps: int
+  number of warmup steps
+scheduler: LRScheduler
+  scheduler following the warmup
+"""
+def __init__(self, lr_begin, warmup_steps, scheduler):
+super(WarmupScheduler, self).__init__()
+self.lr_begin = lr_begin
+self.scheduler = scheduler
+self.lr_final = self.scheduler.base_lr
+if self.lr_begin > self.lr_final:
+raise ValueError("Final lr has to be higher than beginning lr")
+if warmup_steps <= 0:
+raise ValueError("Warmup steps has to be positive")
+self.warmup_steps = warmup_steps
+self.lrs_updates = {}
+self.lr_difference = self.lr_final - self.lr_begin
+
+def __call__(self, num_update):
+if num_update not in self.lrs_updates:
+if num_update < self.warmup_steps:
+increase = self.lr_difference * 
float(num_update)/float(self.warmup_steps)
+self.lrs_updates[num_update] = self.lr_begin + increase
+else:
+if isinstance(self.scheduler, PolyScheduler):
+self.lrs_updates[num_update] = self.scheduler(num_update - 
self.warmup_steps)
 
 Review comment:
   PolyScheduler and CosineScheduler (not implemented here) reduce lr from a 
"starting lr" to an "ending lr" **smoothly**, for example from 0.1 to 0.
   
   With warmup, we first increase the lr from a small value (e.g. 0) to the 
starting lr, then apply the main scheduler. Assuming we warm up for the first 
5 epochs and the total number of training epochs is 90, the effective number 
of epochs for the poly scheduler is 85.
   
   As for a piecewise-constant/factor scheduler, it decays the lr only at 
certain points, so the warmup stage has no effect on it.
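   A toy sketch of that composition (hypothetical helper, not MXNet API): warm 
up linearly, then hand the shifted update count to the main schedule so it 
runs its full profile over the remaining updates.
   
   ```python
   def warmup_then(main_schedule, lr_begin, lr_final, warmup_steps):
       # hypothetical composition helper illustrating the comment above
       def schedule(num_update):
           if num_update < warmup_steps:
               # linear warmup from lr_begin up to lr_final
               return lr_begin + (lr_final - lr_begin) * num_update / float(warmup_steps)
           # shift so the main schedule starts from its own step 0
           return main_schedule(num_update - warmup_steps)
       return schedule
   
   # e.g. 50 warmup steps, then a poly-style decay over the remaining 850 steps
   poly = lambda t: 0.1 * max(0.0, 1.0 - t / 850.0) ** 2
   sched = warmup_then(poly, lr_begin=0.0, lr_final=0.1, warmup_steps=50)
   print(sched(0), sched(50), sched(900))  # 0.0, 0.1, 0.0
   ```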


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil closed issue #10212: MXNet.io website install guide does not have a c++ section

2018-06-19 Thread GitBox
ThomasDelteil closed issue #10212: MXNet.io website install guide does not have 
a c++ section
URL: https://github.com/apache/incubator-mxnet/issues/10212
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #11340: [MXNET-559] Scripts for running the Broken link checker job

2018-06-19 Thread GitBox
marcoabreu commented on issue #11340: [MXNET-559] Scripts for running the  
Broken link checker job
URL: https://github.com/apache/incubator-mxnet/pull/11340#issuecomment-398585346
 
 
   Looks very good aside from some minor changes.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on a change in pull request #11340: [MXNET-559] Scripts for running the Broken link checker job

2018-06-19 Thread GitBox
marcoabreu commented on a change in pull request #11340: [MXNET-559] Scripts 
for running the  Broken link checker job
URL: https://github.com/apache/incubator-mxnet/pull/11340#discussion_r196617428
 
 

 ##
 File path: tests/nightly/broken_link_checker_test/README.md
 ##
 @@ -0,0 +1,13 @@
+# Broken link checker test
 
 Review comment:
   Great!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on a change in pull request #11340: [MXNET-559] Scripts for running the Broken link checker job

2018-06-19 Thread GitBox
marcoabreu commented on a change in pull request #11340: [MXNET-559] Scripts 
for running the  Broken link checker job
URL: https://github.com/apache/incubator-mxnet/pull/11340#discussion_r196617399
 
 

 ##
 File path: tests/nightly/broken_link_checker_test/JenkinsfileForBLC
 ##
 @@ -0,0 +1,74 @@
+// -*- mode: groovy -*-
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+
+//This is a Jenkinsfile for the broken link checker test.
+
+err = null
+
+def init_git() {
+  deleteDir()
+  retry(5) {
+    try {
+      timeout(time: 15, unit: 'MINUTES') {
+        checkout scm
+        sh 'git submodule update --init --recursive'
+        sh 'git clean -d -f'
+      }
+    } catch (exc) {
+      deleteDir()
+      error "Failed to fetch source codes with ${exc}"
+      sleep 2
+    }
+  }
+}
+
+
+try {
+  stage('BLC'){
+    parallel 'BrokenLinkChecker: CPU': {
+      node('mxnetlinux-cpu') {
+        ws('workspace/brokenLinkChecker') {
+          init_git()
+          withCredentials([usernamePassword(credentialsId: 'github-leleamol', passwordVariable: 'APACHE_PASSWORD', usernameVariable: 'APACHE_USERNAME')]) {
+            sh 'aws s3 cp s3://mxnet-ci-prod-slave-data/url_list.txt ./tests/nightly/broken_link_checker_test/url_list.txt'
+            sh "ci/build.py --platform ubuntu_blc /work/runtime_functions.sh broken_link_checker"
 
 Review comment:
   Please use the docker_run from the main Jenkinsfile. Otherwise, this run 
will not use our Docker cache and will slow down the run. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on a change in pull request #11340: [MXNET-559] Scripts for running the Broken link checker job

2018-06-19 Thread GitBox
marcoabreu commented on a change in pull request #11340: [MXNET-559] Scripts 
for running the  Broken link checker job
URL: https://github.com/apache/incubator-mxnet/pull/11340#discussion_r196617118
 
 

 ##
 File path: tests/nightly/broken_link_checker_test/JenkinsfileForBLC
 ##
 @@ -0,0 +1,74 @@
+// -*- mode: groovy -*-
+// Licensed to the Apache Software Foundation (ASF) under one
+// or more contributor license agreements.  See the NOTICE file
+// distributed with this work for additional information
+// regarding copyright ownership.  The ASF licenses this file
+// to you under the Apache License, Version 2.0 (the
+// "License"); you may not use this file except in compliance
+// with the License.  You may obtain a copy of the License at
+//
+//   http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing,
+// software distributed under the License is distributed on an
+// "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+// KIND, either express or implied.  See the License for the
+// specific language governing permissions and limitations
+// under the License.
+
+
+//This is a Jenkinsfile for the broken link checker test.
+
+err = null
+
+def init_git() {
+  deleteDir()
+  retry(5) {
+    try {
+      timeout(time: 15, unit: 'MINUTES') {
+        checkout scm
+        sh 'git submodule update --init --recursive'
+        sh 'git clean -d -f'
+      }
+    } catch (exc) {
+      deleteDir()
+      error "Failed to fetch source codes with ${exc}"
+      sleep 2
+    }
+  }
+}
+
+
+try {
+  stage('BLC'){
+    parallel 'BrokenLinkChecker: CPU': {
+      node('mxnetlinux-cpu') {
+        ws('workspace/brokenLinkChecker') {
+          init_git()
+          withCredentials([usernamePassword(credentialsId: 'github-leleamol', passwordVariable: 'APACHE_PASSWORD', usernameVariable: 'APACHE_USERNAME')]) {
 
 Review comment:
   Do we still need the credentials?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] thomelane commented on issue #11296: Adding 2 tutorials on Learning Rate Schedules

2018-06-19 Thread GitBox
thomelane commented on issue #11296: Adding 2 tutorials on Learning Rate 
Schedules
URL: https://github.com/apache/incubator-mxnet/pull/11296#issuecomment-398584571
 
 
   @KellenSunderland thanks for the review and feedback! Made changes as 
suggested, and fixed the off-by-one error.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wenyangchu opened a new issue #11342: cudnnoff feature is not in gluon

2018-06-19 Thread GitBox
wenyangchu opened a new issue #11342: cudnnoff feature is not in gluon
URL: https://github.com/apache/incubator-mxnet/issues/11342
 
 
   Hi all,
   The symbol interface includes at least the cudnn_tune and cudnn_off 
parameters, which are missing from the gluon interface:
   
   mxnet.symbol.Convolution(data=None, weight=None, bias=None, kernel=_Null, 
stride=_Null, dilate=_Null, pad=_Null, num_filter=_Null, num_group=_Null, 
workspace=_Null, no_bias=_Null, cudnn_tune=_Null, cudnn_off=_Null, 
layout=_Null, name=None, attr=None, out=None, **kwargs)
   
   Can we have them in gluon? They are useful for choosing a desired algorithm 
or for using only the CUDA implementation.
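   For reference, a minimal symbol-level example of the parameters named above 
(values here are illustrative; 'off' is one of cudnn_tune's documented 
options):
   
   ```python
   import mxnet as mx
   
   data = mx.sym.Variable('data')
   # cudnn_off / cudnn_tune are accepted by the symbol API today; the request
   # above is to expose the same knobs on gluon layers such as nn.Conv2D.
   conv = mx.sym.Convolution(data=data, num_filter=32, kernel=(3, 3),
                             cudnn_off=True, cudnn_tune='off', name='conv1')
   ```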


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] thomelane commented on a change in pull request #11296: Adding 2 tutorials on Learning Rate Schedules

2018-06-19 Thread GitBox
thomelane commented on a change in pull request #11296: Adding 2 tutorials on 
Learning Rate Schedules
URL: https://github.com/apache/incubator-mxnet/pull/11296#discussion_r196614808
 
 

 ##
 File path: docs/tutorials/gluon/learning_rate_schedules.md
 ##
 @@ -0,0 +1,317 @@
+
+# Learning Rate Schedules
+
+Setting the learning rate for stochastic gradient descent (SGD) is crucially 
important when training neural networks because it controls both the speed of 
convergence and the ultimate performance of the network. One of the simplest 
learning rate strategies is to have a fixed learning rate throughout the 
training process. Choosing a small learning rate allows the optimizer to find 
good solutions, but this comes at the expense of limiting the initial speed of 
convergence. Changing the learning rate over time can overcome this tradeoff.
 
 Review comment:
   Corrected.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] thomelane commented on a change in pull request #11296: Adding 2 tutorials on Learning Rate Schedules

2018-06-19 Thread GitBox
thomelane commented on a change in pull request #11296: Adding 2 tutorials on 
Learning Rate Schedules
URL: https://github.com/apache/incubator-mxnet/pull/11296#discussion_r196614637
 
 

 ##
 File path: docs/tutorials/gluon/learning_rate_schedules.md
 ##
 @@ -0,0 +1,317 @@
+
+# Learning Rate Schedules
+
+Setting the learning rate for stochastic gradient descent (SGD) is crucially 
important when training neural networks because it controls both the speed of 
convergence and the ultimate performance of the network. One of the simplest 
learning rate strategies is to have a fixed learning rate throughout the 
training process. Choosing a small learning rate allows the optimizer to find 
good solutions, but this comes at the expense of limiting the initial speed of 
convergence. Changing the learning rate over time can overcome this tradeoff.
+
+Schedules define how the learning rate changes over time and are typically 
specified for each epoch or iteration (i.e. batch) of training. Schedules 
differ from adaptive methods (such as AdaDelta and Adam) because they:
+
+* change the global learning rate for the optimizer, rather than 
parameter-wise learning rates
+* don't take feedback from the training process and are specified beforehand
+
+In this tutorial, we visualize the schedules defined in `mx.lr_scheduler`, 
show how to implement custom schedules and see an example of using a schedule 
while training models. Since schedules are passed to `mx.optimizer.Optimizer` 
classes, these methods work with both Module and Gluon APIs.
+
+
+```python
+%matplotlib inline
+from __future__ import print_function
+import math
+import matplotlib.pyplot as plt
+import mxnet as mx
+from mxnet.gluon import nn
+from mxnet.gluon.data.vision import transforms
+import numpy as np
+```
+
+```python
+def plot_schedule(schedule_fn, iterations=1500):
+    iterations = [i for i in range(iterations)]
+    lrs = [schedule_fn(i) for i in iterations]
+    plt.scatter(iterations, lrs)
+    plt.xlabel("Iteration")
+    plt.ylabel("Learning Rate")
+    plt.show()
+```
+
+## Schedules
+
+### Stepwise Decay Schedule
+
+One of the most commonly used learning rate schedules is called stepwise 
decay, where the learning rate is reduced by a factor at certain intervals. 
MXNet implements a `FactorScheduler` for equally spaced intervals, and 
`MultiFactorScheduler` for greater control. We start with an example of halving 
the learning rate every 250 iterations.
+
+
+```python
+schedule = mx.lr_scheduler.FactorScheduler(step=250, factor=0.5)
+schedule.base_lr = 1
+plot_schedule(schedule)
+```
+
+
+![png](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/lr_schedules/factor.png)
 
+
+
+Note: the `base_lr` is used to determine the initial learning rate. It takes a 
default value of 0.01 since we inherit from `mx.lr_scheduler.LRScheduler`, but 
it can be set as a property of the schedule. We will see later in this tutorial 
that `base_lr` is set automatically when providing the `lr_schedule` to 
`Optimizer`. Also be aware that the schedules in `mx.lr_scheduler` have state 
(i.e. counters, etc) so calling the schedule out of order may give unexpected 
results.
+
+We can define non-uniform intervals with `MultiFactorScheduler` and in the 
example below we halve the learning rate at iterations 250, 750 (500 iterations 
after) and 900 (150 iterations after).
+
+
+```python
+schedule = mx.lr_scheduler.MultiFactorScheduler(step=[250, 750, 900], factor=0.5)
+schedule.base_lr = 1
+plot_schedule(schedule)
+```
+
+
+![png](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/lr_schedules/multifactor.png)
 
+
+
+### Polynomial Schedule
+
+Stepwise schedules and the discontinuities they introduce may sometimes lead 
to instability in the optimization, so in some cases smoother schedules are 
preferred. `PolyScheduler` gives a smooth decay using a polynomial function and 
reaches a learning rate of 0 after `max_update` iterations. In the example 
below, we have a quadratic function (`pwr=2`) that falls from 1 to 0 over 1000 
iterations. After this the learning rate stays at 0 so nothing will be learnt 
from `max_update` iterations onwards.
+
+
+```python
+schedule = mx.lr_scheduler.PolyScheduler(max_update=1000, base_lr=1, pwr=2)
+plot_schedule(schedule)
+```
+
+
+![png](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/lr_schedules/polynomial.png)
 
+
+
+Note: unlike `FactorScheduler`, the `base_lr` is set as an argument when 
instantiating the schedule.
+
+### Custom Schedules
+
+You can implement your own custom schedule with a function or callable class 
that takes an integer denoting the iteration index (e.g. 123) and returns a 
float representing the learning rate to be used for that iteration. We 
implement the Cosine Annealing Schedule in the example below as a callable 
class (see the `__call__` method).
+
+
+```python
+class CosineAnnealingSchedule():
+    def 
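# [The archived message is cut off above. The rest of this block is a sketch
#  consistent with the surrounding description (min_lr, max_lr, cycle_length),
#  not necessarily the tutorial's exact code.]
import math

class CosineAnnealingSchedule():
    def __init__(self, min_lr, max_lr, cycle_length):
        self.min_lr = min_lr
        self.max_lr = max_lr
        self.cycle_length = cycle_length

    def __call__(self, iteration):
        # cosine decay from max_lr down to min_lr over cycle_length iterations
        if iteration <= self.cycle_length:
            unit_cycle = (1 + math.cos(iteration * math.pi / self.cycle_length)) / 2
            return self.min_lr + (self.max_lr - self.min_lr) * unit_cycle
        return self.min_lr

schedule = CosineAnnealingSchedule(min_lr=0, max_lr=1, cycle_length=1000)
# plot_schedule(schedule)  # helper defined earlier in the quoted tutorial
```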

[GitHub] thomelane commented on a change in pull request #11296: Adding 2 tutorials on Learning Rate Schedules

2018-06-19 Thread GitBox
thomelane commented on a change in pull request #11296: Adding 2 tutorials on 
Learning Rate Schedules
URL: https://github.com/apache/incubator-mxnet/pull/11296#discussion_r196613954
 
 

 ##
 File path: docs/tutorials/gluon/learning_rate_schedules_advanced.md
 ##
 @@ -0,0 +1,308 @@
+
+ # Advanced Learning Rate Schedules
+
+Given the importance of learning rate and the learning rate schedule for 
training neural networks, there have been a number of research papers published 
recently on the subject. Although many practitioners are using simple learning 
rate schedules such as stepwise decay, research has shown that there are other 
strategies that work better in most situations. We implement a number of 
different schedule shapes in this tutorial and introduce cyclical schedules.
+
+See the "Learning Rate Schedules" tutorial for a more basic overview of 
learning rates, and an example of how to use them while training your own 
models.
+
+
+```python
+%matplotlib inline
+import copy
+import math
+import mxnet as mx
+import numpy as np
+import matplotlib.pyplot as plt
+```
+
+
+```python
+def plot_schedule(schedule_fn, iterations=1500):
+    iterations = [i for i in range(iterations)]
+    lrs = [schedule_fn(i) for i in iterations]
+    plt.scatter(iterations, lrs)
+    plt.xlabel("Iteration")
+    plt.ylabel("Learning Rate")
+    plt.show()
+```
+
+## Custom Schedule Shapes
+
+### (Slanted) Triangular
+
+While trying to push the boundaries of batch size for faster training, [Priya 
Goyal et al. (2017)](https://arxiv.org/abs/1706.02677) found that having a 
smooth linear warm-up in the learning rate at the start of training improved 
the stability of the optimizer and led to better solutions. It was found that 
a smooth increase gave improved performance over a stepwise increase.
+
+We look at "warm-up" in more detail later in the tutorial, but this could be 
viewed as a specific case of the **"triangular"** schedule that was proposed by 
[Leslie N. Smith (2015)](https://arxiv.org/abs/1506.01186). Quite simply, the 
schedule linearly increases then decreases between a lower and upper bound. 
Originally it was suggested this schedule be used as part of a cyclical 
schedule but more recently researchers have been using a single cycle.
+
+One adjustment proposed by [Jeremy Howard, Sebastian Ruder 
(2018)](https://arxiv.org/abs/1801.06146) was to change the ratio between the 
increasing and decreasing stages, instead of the 50:50 split. Changing the 
increasing fraction (`inc_fraction!=0.5`) leads to a **"slanted triangular"** 
schedule. Using `inc_fraction<0.5` tends to give better results.
+
+
+```python
+class TriangularSchedule():
+    def __init__(self, min_lr, max_lr, cycle_length, inc_fraction=0.5):
+        """
+        min_lr: lower bound for learning rate (float)
+        max_lr: upper bound for learning rate (float)
+        cycle_length: iterations between start and finish (int)
+        inc_fraction: fraction of iterations spent in increasing stage (float)
+        """
+        self.min_lr = min_lr
+        self.max_lr = max_lr
+        self.cycle_length = cycle_length
+        self.inc_fraction = inc_fraction
+
+    def __call__(self, iteration):
+        if iteration <= self.cycle_length*self.inc_fraction:
+            unit_cycle = iteration * 1/(self.cycle_length*self.inc_fraction)
+        elif iteration <= self.cycle_length:
+            unit_cycle = (self.cycle_length - iteration) * 1/(self.cycle_length*(1-self.inc_fraction))
+        else:
+            unit_cycle = 0
+        adjusted_cycle = (unit_cycle * (self.max_lr - self.min_lr)) + self.min_lr
+        return adjusted_cycle
+```
+
+We look at an example of a slanted triangular schedule that increases from a 
learning rate of 1 to 2, and back to 1 over 1000 iterations. Since we set 
`inc_fraction=0.2`, 200 iterations are used for the increasing stage, and 800 
for the decreasing stage. After this, the schedule stays at the lower bound 
indefinitely.
+
+
+```python
+schedule = TriangularSchedule(min_lr=1, max_lr=2, cycle_length=1000, inc_fraction=0.2)
+plot_schedule(schedule)
+```
+
+
+![png](https://raw.githubusercontent.com/dmlc/web-data/master/mxnet/doc/tutorials/lr_schedules/adv_triangular.png)
 
+
+
+### Cosine
+
+Continuing with the idea that smooth decay profiles give improved performance 
over stepwise decay, [Ilya Loshchilov, Frank Hutter 
(2016)](https://arxiv.org/abs/1608.03983) used **"cosine annealing"** schedules 
to good effect. As with triangular schedules, the original idea was that this 
should be used as part of a cyclical schedule, but we begin by implementing the 
cosine annealing component before the full Stochastic Gradient Descent with 
Warm Restarts (SGDR) method later in the tutorial.
+
+
+```python
+class CosineAnnealingSchedule():
+    def __init__(self, min_lr, max_lr, cycle_length):
+        """
+        min_lr: lower bound for learning rate (float)
+        max_lr: upper bound 

[GitHub] thomelane commented on a change in pull request #11296: Adding 2 tutorials on Learning Rate Schedules

2018-06-19 Thread GitBox
thomelane commented on a change in pull request #11296: Adding 2 tutorials on 
Learning Rate Schedules
URL: https://github.com/apache/incubator-mxnet/pull/11296#discussion_r196614325
 
 

 ##
 File path: docs/tutorials/gluon/learning_rate_schedules_advanced.md
 ##
 @@ -0,0 +1,308 @@
+
+ # Advanced Learning Rate Schedules
+
+Given the importance of learning rate and the learning rate schedule for 
training neural networks, there have been a number of research papers published 
recently on the subject. Although many practitioners are using simple learning 
rate schedules such as stepwise decay, research has shown that there are other 
strategies that work better in most situations. We implement a number of 
different schedule shapes in this tutorial and introduce cyclical schedules.
+
+See the "Learning Rate Schedules" tutorial for a more basic overview of 
learning rates, and an example of how to use them while training your own 
models.
+
+
+```python
+%matplotlib inline
+import copy
+import math
+import mxnet as mx
+import numpy as np
+import matplotlib.pyplot as plt
+```
+
+
+```python
+def plot_schedule(schedule_fn, iterations=1500):
+    iterations = [i for i in range(iterations)]
+    lrs = [schedule_fn(i) for i in iterations]
+    plt.scatter(iterations, lrs)
+    plt.xlabel("Iteration")
+    plt.ylabel("Learning Rate")
+    plt.show()
+```
+
+## Custom Schedule Shapes
+
+### (Slanted) Triangular
+
+While trying to push the boundaries of batch size for faster training, [Priya 
Goyal et al. (2017)](https://arxiv.org/abs/1706.02677) found that having a 
smooth linear warm-up in the learning rate at the start of training improved 
the stability of the optimizer and led to better solutions. It was found that 
a smooth increase gave improved performance over a stepwise increase.
+
+We look at "warm-up" in more detail later in the tutorial, but this could be 
viewed as a specific case of the **"triangular"** schedule that was proposed by 
[Leslie N. Smith (2015)](https://arxiv.org/abs/1506.01186). Quite simply, the 
schedule linearly increases then decreases between a lower and upper bound. 
Originally it was suggested this schedule be used as part of a cyclical 
schedule but more recently researchers have been using a single cycle.
+
+One adjustment proposed by [Jeremy Howard, Sebastian Ruder 
(2018)](https://arxiv.org/abs/1801.06146) was to change the ratio between the 
increasing and decreasing stages, instead of the 50:50 split. Changing the 
increasing fraction (`inc_fraction!=0.5`) leads to a **"slanted triangular"** 
schedule. Using `inc_fraction<0.5` tends to give better results.
+
+
+```python
+class TriangularSchedule():
+    def __init__(self, min_lr, max_lr, cycle_length, inc_fraction=0.5):
+        """
+        min_lr: lower bound for learning rate (float)
+        max_lr: upper bound for learning rate (float)
+        cycle_length: iterations between start and finish (int)
+        inc_fraction: fraction of iterations spent in increasing stage (float)
+        """
+        self.min_lr = min_lr
+        self.max_lr = max_lr
+        self.cycle_length = cycle_length
+        self.inc_fraction = inc_fraction
+
+    def __call__(self, iteration):
+        if iteration <= self.cycle_length*self.inc_fraction:
 
 Review comment:
   Changed throughout.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on issue #11330: [MXNET-537] add_n(dense, csr, dense) = dense and add_n([dense, csr, rsp]*, dense, [dense, csr, rsp]*) = dense on CPU & GPU

2018-06-19 Thread GitBox
haojin2 commented on issue #11330: [MXNET-537] add_n(dense, csr, dense) = dense 
and add_n([dense, csr, rsp]*, dense, [dense, csr, rsp]*) = dense on CPU & GPU
URL: https://github.com/apache/incubator-mxnet/pull/11330#issuecomment-398583696
 
 
   Benchmark result for add_n(more than 4 inputs with at least 1 dense) = dense:
   ([density%] [speedup])
   CPU:
   1.00% 1.4248320861874664
   0.50% 1.4591373125830511
   0.10% 1.487516900293522
   0.05% 1.4891773584928327
   0.01% 1.483387504757
   GPU:
   1.00% 1.5829503717448206
   0.50% 1.612348854910054
   0.10% 1.6657770987040201
   0.05% 1.6743607944367647
   0.01% 1.6844786052948375
   Benchmark script:
   ```python
   import mxnet as mx
   import sys
   import os
   import scipy
   import numpy as np
   from mxnet.test_utils import rand_ndarray, assert_almost_equal
   import time
   
   def measure_cost(repeat, a, b, c, d, e, out=None):
       # start bench
       start = time.time()
       results = []
       for i in range(repeat):
           results.append(mx.nd.sparse.add_n(a, b, c, d, e, out=out))
       for result in results:
           result.wait_to_read()
       end = time.time()
       diff = end - start
       return diff / repeat
   
   def measure_fallback(repeat, a):
       # start bench
       start = time.time()
       results = []
       for i in range(repeat):
           results.append(a.tostype('default'))
       for result in results:
           result.wait_to_read()
       end = time.time()
       diff = end - start
       return diff / repeat
   
   def main():
       shape = (100, 128)
       dns = np.random.uniform(size=shape)
       context = mx.gpu(0)
       # context = mx.cpu()
       mx_dns1 = mx.nd.array(dns, ctx=context)
       mx_dns2 = mx.nd.array(dns, ctx=context)
       mx_dns3 = mx.nd.array(dns, ctx=context)
       for density in [0.01, 0.005, 0.001, 0.0005, 0.0001]:
           mx_csr = rand_ndarray(shape=shape, stype='csr', density=density).as_in_context(context)
           mx_csr_dns = mx_csr.tostype('default')
           mx_rsp = rand_ndarray(shape=shape, stype='row_sparse', density=density).as_in_context(context)
           mx_rsp_dns = mx_rsp.tostype('default')
           sparse_cost = 0.0
           dns_cost = 0.0
           mx.nd.waitall()
           # warmup
           check = mx.nd.sparse.add_n(mx_dns1, mx_csr, mx_rsp, mx_dns2, mx_dns3)
           dns1 = dns + mx_csr_dns.asnumpy() + mx_rsp_dns.asnumpy() + dns + dns
           assert_almost_equal(check.asnumpy(), dns1, atol=1e-5, rtol=1e-4)
           mx.nd.waitall()
           for i in range(20):
               sparse_cost += measure_cost(5, mx_dns1, mx_csr, mx_dns2, mx_rsp, mx_dns3)
               dns_cost += measure_cost(5, mx_dns1, mx_csr_dns, mx_dns2, mx_rsp_dns, mx_dns3)
           print("%.2f %%" % (density*100), dns_cost / sparse_cost)
   
   
   if __name__ == "__main__":
       main()
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wenyangchu commented on issue #11339: compiling error when USE_CUDA=1 and USE_CUDNN=0 with make command

2018-06-19 Thread GitBox
wenyangchu commented on issue #11339: compiling error when USE_CUDA=1  and 
USE_CUDNN=0 with make command
URL: 
https://github.com/apache/incubator-mxnet/issues/11339#issuecomment-398583421
 
 
   By the way, I suppose Jenkins does not test this configuration.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wenyangchu opened a new issue #11341: Deterministic cudnn algorithms

2018-06-19 Thread GitBox
wenyangchu opened a new issue #11341: Deterministic cudnn algorithms
URL: https://github.com/apache/incubator-mxnet/issues/11341
 
 
   Hi all,
   
   I see that other frameworks such as PyTorch provide a flag to force cuDNN 
to use only deterministic algorithms, especially for convolution and max 
pooling:
   
   like in pytorch:
   [example](https://github.com/pytorch/pytorch/pull/2893/files)
   
   Some references to cudnn:
   
[maxpooling](https://docs.nvidia.com/deeplearning/sdk/cudnn-archived/cudnn_713/cudnn-developer-guide/index.html#cudnnPoolingMode_t
   )
   
[cudnnConvolutionBackwardData](https://docs.nvidia.com/deeplearning/sdk/cudnn-archived/cudnn_713/cudnn-developer-guide/index.html#cudnnConvolutionBackwardData)
   
   
[cudnnConvolutionBwdFilterAlgo_t](https://docs.nvidia.com/deeplearning/sdk/cudnn-archived/cudnn_713/cudnn-developer-guide/index.html#cudnnConvolutionBwdFilterAlgo_t)
   
   I am working on a medical product where reproducibility is an issue. 
Therefore, this feature will be very important for me.
   
   One solution is to have a flag like MXNET_CUDNN_AUTOTUNE_DEFAULT.
   Is anyone working on this? I have reviewed the current code and would like 
to implement it. 
   Any suggestions or better solutions?
   
   Thanks
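   For reference, the PyTorch switch mentioned above is a one-liner (this is 
the real torch API; the MXNet equivalent proposed here does not exist yet):
   
   ```python
   import torch
   
   # force cuDNN to select only deterministic algorithms in PyTorch
   torch.backends.cudnn.deterministic = True
   ```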
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] leleamol opened a new pull request #11340: [MXNET-559] Scripts for running the Broken link checker job

2018-06-19 Thread GitBox
leleamol opened a new pull request #11340: [MXNET-559] Scripts for running the  
Broken link checker job
URL: https://github.com/apache/incubator-mxnet/pull/11340
 
 
   ## Description ##
   This PR contains the scripts and jenkins files to run the broken link 
checker job
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [y] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [y] Changes are complete (i.e. I finished coding on this PR)
   - [NA] All changes have test coverage:
   - [Y] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   
   @marcoabreu 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on issue #11330: [MXNET-537] add_n(dense, csr, dense) = dense and add_n([dense, csr, rsp]*, dense, [dense, csr, rsp]*) = dense on CPU & GPU

2018-06-19 Thread GitBox
haojin2 commented on issue #11330: [MXNET-537] add_n(dense, csr, dense) = dense 
and add_n([dense, csr, rsp]*, dense, [dense, csr, rsp]*) = dense on CPU & GPU
URL: https://github.com/apache/incubator-mxnet/pull/11330#issuecomment-398581824
 
 
   Benchmark result for add_n(dense, csr, dense) = dense:
   ([density%] [speedup])
   CPU:
   1.00% 1.1282194997074237
   0.50% 1.1686160529139418
   0.10% 1.1909255730224886
   0.05% 1.1970586102280831
   0.01% 1.202483677804412
   GPU:
   1.00% 1.1627124767202126
   0.50% 1.2392510678426578
   0.10% 1.3169708612264934
   0.05% 1.3275811285384644
   0.01 % 1.3358768672033845
   benchmark script:
   ```python
   import mxnet as mx
   import sys
   import os
   import scipy
   import numpy as np
   from mxnet.test_utils import rand_ndarray, assert_almost_equal
   import time
   
   def measure_cost(repeat, a, b, c, out=None):
       # start bench
       start = time.time()
       results = []
       for i in range(repeat):
           results.append(mx.nd.sparse.add_n(a, b, c, out=out))
       for result in results:
           result.wait_to_read()
       end = time.time()
       diff = end - start
       return diff / repeat
   
   def measure_fallback(repeat, a):
       # start bench
       start = time.time()
       results = []
       for i in range(repeat):
           results.append(a.tostype('default'))
       for result in results:
           result.wait_to_read()
       end = time.time()
       diff = end - start
       return diff / repeat
   
   def main():
       shape = (128, 100)
       dns = np.random.uniform(size=shape)
       # context = mx.gpu(0)
       context = mx.cpu()
       mx_dns1 = mx.nd.array(dns, ctx=context)
       mx_dns2 = mx.nd.array(dns, ctx=context)
       for density in [0.01, 0.005, 0.001, 0.0005, 0.0001]:
           mx_csr = rand_ndarray(shape=shape, stype='csr', density=density).as_in_context(context)
           mx_csr_dns = mx_csr.tostype('default')
           sparse_cost = 0.0
           dns_cost = 0.0
           mx.nd.waitall()
           # warmup
           check = mx.nd.sparse.add_n(mx_dns1, mx_csr, mx_dns2)
           dns1 = dns + mx_csr_dns.asnumpy() + dns
           assert_almost_equal(check.asnumpy(), dns1, atol=1e-5, rtol=1e-4)
           mx.nd.waitall()
           for i in range(20):
               sparse_cost += measure_cost(5, mx_dns1, mx_csr, mx_dns2)
               dns_cost += measure_cost(5, mx_dns1, mx_csr_dns, mx_dns2)
           print("%.2f %%" % (density*100), dns_cost / sparse_cost)
   
   
   if __name__ == "__main__":
       main()
   ```
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] haojin2 commented on issue #11330: [MXNET-537] add_n(dense, csr, dense) = dense and add_n([dense, csr, rsp]*, dense, [dense, csr, rsp]*) = dense on CPU & GPU

2018-06-19 Thread GitBox
haojin2 commented on issue #11330: [MXNET-537] add_n(dense, csr, dense) = dense 
and add_n([dense, csr, rsp]*, dense, [dense, csr, rsp]*) = dense on CPU & GPU
URL: https://github.com/apache/incubator-mxnet/pull/11330#issuecomment-398581169
 
 
   Benchmark results for warp-optimized GPU kernel for elemwise_add/sub(dense, 
csr):
   ([density%]  [new speedup of write inplace] [old speedup of write inplace])
   ([density%]  [new speedup of write to] [old speedup of write to])
   1.00% 4.3422253664233255  1.1133807433946643
   1.00% 1.8064753025920386  1.1127745540337441
   0.50% 8.719801584535675  1.2989243065699914
   0.50% 2.2845434302137857  1.2954892083078022
   0.10% 51.95314630061374  1.4716730306016637
   0.10% 2.878010179453661  1.4621131161544634
   0.05% 90.41164608500259  1.4950209892594164
   0.05% 2.9590177057354445  1.494533324414405
   0.01% 165.45560871876663  1.5066270652228635
   0.01% 2.9965883337578574  1.4932449464071242
   benchmark script is the same one used in #10550 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wenyangchu opened a new issue #11339: compiling error when USE_CUDA=1 and USE_CUDNN=0 with make command

2018-06-19 Thread GitBox
wenyangchu opened a new issue #11339: compiling error when USE_CUDA=1  and 
USE_CUDNN=0 with make command
URL: https://github.com/apache/incubator-mxnet/issues/11339
 
 
   ## Description
   Compiling error when USE_CUDA=1 and USE_CUDNN=0 with the make command.
   
   ## Environment info (Required)
   ```
   Ubuntu 16.04 
   
   ```
   ## Build info (Required if built from source)
   Compiler g++/gcc (Ubuntu 5.4.0-6ubuntu1~16.04.5) 5.4.0 20160609
   
   ## Error Message:
   src/operator/nn/convolution.cu(93): error: identifier "param_" is undefined
   src/operator/nn/convolution.cu(171): error: identifier "param_" is undefined
   
   and fix it in the code, get more errors:
   
   build/src/engine/naive_engine.o: In function 
`mxnet::engine::NaiveEngine::~NaiveEngine()':
   
naive_engine.cc:(.text._ZN5mxnet6engine11NaiveEngineD2Ev[_ZN5mxnet6engine11NaiveEngineD5Ev]+0xd35):
 undefined reference to `cudnnDestroy'
   ## Minimum reproducible example
   Use the latest master branch
   
   ## Steps to reproduce
   (Paste the commands you ran that produced the error.)
   1. checkout latest master branch
   2.  run 
   make -j $(nproc) USE_OPENCV=1 USE_BLAS=openblas USE_CUDA=1 
USE_CUDA_PATH=/usr/local/cuda USE_CUDNN=0
   
   ## What have you tried to solve it?
   
   1.  By changing the following:
   diff --git a/src/operator/nn/convolution.cu b/src/operator/nn/convolution.cu
   index 9f61212..9f573f1 100644
   --- a/src/operator/nn/convolution.cu
   +++ b/src/operator/nn/convolution.cu
   @@ -89,7 +89,7 @@ void ConvolutionCompute(const nnvm::NodeAttrs& attrs,
   const ConvolutionParam& param = nnvm::get<ConvolutionParam>(attrs.parsed);
  int dtype = inputs[conv::kData].type_flag_;
   
   -#if CUDNN_MAJOR < 5
   +#if MXNET_USE_CUDNN ==1 && CUDNN_MAJOR < 5
  if (param_.layout.value() != kNCW &&
  param_.layout.value() != kNCHW &&
  param_.layout.value() != kNCDHW) {
   @@ -167,7 +167,7 @@ void ConvolutionGradCompute(const nnvm::NodeAttrs& 
attrs,
   const std::vector<TBlob> &out_grad = outputs;
  int dtype = out_grad.type_flag_;
   
   -#if CUDNN_MAJOR < 5
   +#if MXNET_USE_CUDNN ==1 && CUDNN_MAJOR < 5
  if (param_.layout.value() != kNCW &&
  param_.layout.value() != kNCHW &&
  param_.layout.value() != kNCDHW) {
   
   I can continue compiling but get another error below:
   
   build/src/engine/naive_engine.o: In function 
`mxnet::engine::NaiveEngine::~NaiveEngine()':
   
naive_engine.cc:(.text._ZN5mxnet6engine11NaiveEngineD2Ev[_ZN5mxnet6engine11NaiveEngineD5Ev]+0xd35):
 undefined reference to `cudnnDestroy'
   
naive_engine.cc:(.text._ZN5mxnet6engine11NaiveEngineD2Ev[_ZN5mxnet6engine11NaiveEngineD5Ev]+0x1ce4):
 undefined reference to `cudnnGetErrorString'
   build/src/engine/naive_engine.o: In function `void 
mshadow::DeleteStream<mshadow::gpu>(mshadow::Stream<mshadow::gpu>*)':
   
naive_engine.cc:(.text._ZN7mshadow12DeleteStreamINS_3gpuEEEvPNS_6StreamIT_EE[_ZN7mshadow12DeleteStreamINS_3gpuEEEvPNS_6StreamIT_EE]+0x12a):
 undefined reference to `cudnnDestroy'
   
   
   The cuDNN flag is set in mshadow, I suppose, but I haven't found out what 
makes the cuDNN flag true.
   
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #10827: [MXNET-405][WIP] Add 2 new pipelines to the Official CI and run nightly tests.

2018-06-19 Thread GitBox
marcoabreu commented on issue #10827: [MXNET-405][WIP] Add 2 new pipelines to 
the Official CI and run nightly tests. 
URL: https://github.com/apache/incubator-mxnet/pull/10827#issuecomment-398578767
 
 
   P3.8xlarge is now available with label 'mxnetlinux-gpu-p3-8xlarge'


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on issue #10212: MXNet.io website install guide does not have a c++ section

2018-06-19 Thread GitBox
aaronmarkham commented on issue #10212: MXNet.io website install guide does not 
have a c++ section
URL: 
https://github.com/apache/incubator-mxnet/issues/10212#issuecomment-398578428
 
 
   @ThomasDelteil I think you can close this now as the C++ install is added.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wenboown commented on issue #3441: Install mxnet into anaconda's python

2018-06-19 Thread GitBox
wenboown commented on issue #3441: Install mxnet into anaconda's python
URL: 
https://github.com/apache/incubator-mxnet/issues/3441#issuecomment-398577975
 
 
   I figured out how to build `mxnet` from source for `conda` for macOS 10.12 
and 10.13. Here's the step-by-step guide:
   
https://boknowsit.wordpress.com/2018/05/30/setting-up-mxnet-on-macos-with-conda/


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wenboown removed a comment on issue #3441: Install mxnet into anaconda's python

2018-06-19 Thread GitBox
wenboown removed a comment on issue #3441: Install mxnet into anaconda's python
URL: 
https://github.com/apache/incubator-mxnet/issues/3441#issuecomment-398577623
 
 
   I figured out a way to install mxnet in conda. I recorded the steps here:
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] wenboown commented on issue #3441: Install mxnet into anaconda's python

2018-06-19 Thread GitBox
wenboown commented on issue #3441: Install mxnet into anaconda's python
URL: 
https://github.com/apache/incubator-mxnet/issues/3441#issuecomment-398577623
 
 
   I figured out a way to install mxnet in conda. I recorded the steps here:
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] asitstands commented on issue #10951: [MXNET-545] Fix broken cython build

2018-06-19 Thread GitBox
asitstands commented on issue #10951: [MXNET-545] Fix broken cython build
URL: https://github.com/apache/incubator-mxnet/pull/10951#issuecomment-398577395
 
 
   I removed the use of the cython modules from the CentOS environments as @marcoabreu requested. Now the "CPU: Openblas" and "GPU: Cuda 9.1" environments build the cython modules, and "Python 2: GPU" and "Python 3: CPU" use the cython modules to run the tests. They also run the `check_cython` check from the comment above to verify that the cython modules are actually used.
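   
   A minimal sketch of what such a check can look like, assuming the cython accelerators live under `mxnet._cy2`/`mxnet._cy3` as in the MXNet source tree (an illustration only, not the actual `check_cython` script):
   
   ```python
   # Hedged sketch: confirm the cython backend (not the ctypes fallback) is loaded.
   import sys
   import mxnet as mx

   base = mx.nd.NDArray.__bases__[0]           # NDArrayBase, from _cy2/_cy3 or _ctypes
   print("NDArray backend:", base.__module__)  # e.g. 'mxnet._cy3.ndarray'
   expected = '_cy3' if sys.version_info[0] == 3 else '_cy2'
   assert expected in base.__module__, "ctypes fallback in use: %s" % base.__module__
   ```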




[GitHub] ThomasDelteil commented on a change in pull request #11332: [MXNET-558] Fix 'AttributeError: '_thread._local' object has no attribute 'value'' on distributed processing applications

2018-06-19 Thread GitBox
ThomasDelteil commented on a change in pull request #11332: [MXNET-558] Fix 
'AttributeError: '_thread._local' object has no attribute 'value'' on 
distributed processing applications
URL: https://github.com/apache/incubator-mxnet/pull/11332#discussion_r196608187
 
 

 ##
 File path: python/mxnet/symbol/symbol.py
 ##
 @@ -2451,7 +2451,8 @@ def var(name, attr=None, shape=None, lr_mult=None, wd_mult=None, dtype=None,
     handle = SymbolHandle()
     check_call(_LIB.MXSymbolCreateVariable(c_str(name), ctypes.byref(handle)))
     ret = Symbol(handle)
-    attr = AttrScope._current.value.get(attr)
+    with AttrScope():
 
 Review comment:
   Fair point; I just thought it was cleaner that way and had better separation of concerns. I will update.




[GitHub] marcoabreu commented on issue #11333: [WIP] fix issue of test_gru_bidirectional #11219 and add robust code

2018-06-19 Thread GitBox
marcoabreu commented on issue #11333: [WIP] fix issue of test_gru_bidirectional 
#11219 and add robust code
URL: https://github.com/apache/incubator-mxnet/pull/11333#issuecomment-398572404
 
 
   Hello,
   Yes, please don't use our CI for this kind of thing; run it locally instead. Please see https://cwiki.apache.org/confluence/display/MXNET/Reproducing+test+results#Reproducingtestresults-Repeatingtestexecution for details.
   
   Correct, this timeout was our CI terminating the job because it was running 
too long.
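   
   For reference, a hedged sketch of the local repetition workflow that wiki page describes (it assumes the `MXNET_TEST_COUNT` hook documented there and that the test lives in `tests/python/unittest/test_operator.py`):
   
   ```python
   # Hedged sketch: repeat one unit test locally many times instead of on CI.
   # Assumes nose is installed and MXNET_TEST_COUNT is honored by @with_seed,
   # per the wiki page linked above.
   import os
   import subprocess

   env = dict(os.environ, MXNET_TEST_COUNT="100")  # run the test body 100 times
   subprocess.run(
       ["nosetests", "--verbose", "-s",
        "tests/python/unittest/test_operator.py:test_gru_bidirectional"],
       env=env, check=True,
   )
   ```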




[GitHub] DickJC123 opened a new pull request #11338: [MXNET-11241] Avoid use of troublesome cudnnFind() results when grad_req='add'

2018-06-19 Thread GitBox
DickJC123 opened a new pull request #11338: [MXNET-11241] Avoid use of 
troublesome cudnnFind() results when grad_req='add'
URL: https://github.com/apache/incubator-mxnet/pull/11338
 
 
   ## Description ##
   The problem in issue #11241 arises because cudnnFind() measures convolution algorithm runtimes with an assumed "output blending parameter" beta of 0. However, algorithms may have specialized kernels for the beta==0 case, different from and faster than the generalized beta kernels. If the generalized kernels have issues with a problem size that the beta==0 kernels do not, then the algos returned by cudnnFind() might fail when invoked with beta==1 (as happens when the convolution op's grad_req='add' argument is present).
   
   The demonstrated problem area involves a large 'c' value of 64K, where for the backprop-to-filter kernel only algo 1 handles the beta==1 case. cudnnFind() was shown to occasionally return algos 0 or 3 as fastest, and both of these return error 8 "execution failed" when run.
   
   The fix is based on the observation that cudnnGet() returns algo 1 for the backprop-to-filter kernel for the troublesome problem sizes. Thus, the fix is to avoid cudnnFind() when grad_req='add' and force use of cudnnGet() instead. The fix maintains the effectiveness of the caching of algo lookups and convolution op instances, so neither cudnnFind() nor cudnnGet() is called repeatedly. Deconvolution was similarly updated with this fix.
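   
   For illustration, a minimal sketch of a configuration that exercises this code path (shapes are hypothetical, chosen only to mirror the large-channel case described above):
   
   ```python
   # Hedged repro sketch: grad_req='add' makes backward accumulate into the
   # gradient buffer, i.e. the cuDNN kernels are invoked with beta==1.
   import mxnet as mx

   data = mx.sym.Variable('data')
   conv = mx.sym.Convolution(data=data, num_filter=64, kernel=(3, 3),
                             pad=(1, 1), no_bias=True, name='conv')
   # Large channel count mirroring the 64K case from issue #11241 (hypothetical):
   exe = conv.simple_bind(mx.gpu(0), data=(1, 65536, 8, 8), grad_req='add')
   exe.forward(is_train=True)
   exe.backward(mx.nd.ones(exe.outputs[0].shape, ctx=mx.gpu(0)))
   ```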
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
   - [x] Changes are complete (i.e. I finished coding on this PR) [1st commit includes test, 2nd the fix]
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
   - [x] Code is well-documented:
   - For user-facing API changes, API doc string has been updated.
   - For new C++ functions in header files, their functionalities and arguments are documented.
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on test set and reference to the original paper if applicable
   - Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] wenyangchu commented on issue #11247: Add seed_aug parameter for ImageRecordItr to fix random seed for default augmentation

2018-06-19 Thread GitBox
wenyangchu commented on issue #11247: Add seed_aug parameter for ImageRecordItr 
to fix random seed for default augmentation
URL: https://github.com/apache/incubator-mxnet/pull/11247#issuecomment-398569844
 
 
   I think I have a flaky test again.




[GitHub] vrakesh commented on issue #11334: Cannot initializing parameters of SymbolBlock.

2018-06-19 Thread GitBox
vrakesh commented on issue #11334: Cannot initializing parameters of 
SymbolBlock.
URL: 
https://github.com/apache/incubator-mxnet/issues/11334#issuecomment-398566921
 
 
   Thank you for reporting the issue, @kice. @sandeep-krishnamurthy, requesting to tag this under Gluon.




[incubator-mxnet] branch master updated: Add standard ResNet data augmentation for ImageRecordIter (#11027)

2018-06-19 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new ccee176  Add standard ResNet data augmentation for ImageRecordIter 
(#11027)
ccee176 is described below

commit ccee17672b23fa864f5c2e67d6bcea5ccff2979e
Author: Tong He 
AuthorDate: Tue Jun 19 15:23:19 2018 -0700

Add standard ResNet data augmentation for ImageRecordIter (#11027)

* add resnet augmentation

* add test

* fix scope

* fix warning

* fix lint

* fix lint

* add color jitter and pca noise

* fix center crop

* merge

* fix lint

* Trigger CI

* fix

* fix augmentation implementation

* add checks for parameters

* modify training script

* fix compile error

* Trigger CI

* Trigger CI

* modify error message

* Trigger CI

* Trigger CI

* Trigger CI

* improve script in example

* fix script

* clear code

* Trigger CI

* set min_aspect_ratio to optional, move rotation and pad before random 
resized crop

* fix

* Trigger CI

* Trigger CI

* Trigger CI

* fix default values

* Trigger CI
---
 example/image-classification/common/data.py    |  48 +++--
 example/image-classification/train_imagenet.py |   4 +-
 src/io/image_aug_default.cc                    | 241 +++--
 tests/python/train/test_resnet_aug.py          | 173 ++
 4 files changed, 435 insertions(+), 31 deletions(-)

diff --git a/example/image-classification/common/data.py b/example/image-classification/common/data.py
index 05f5ddc..bfaadb3 100755
--- a/example/image-classification/common/data.py
+++ b/example/image-classification/common/data.py
@@ -43,9 +43,9 @@ def add_data_args(parser):
 def add_data_aug_args(parser):
     aug = parser.add_argument_group(
         'Image augmentations', 'implemented in src/io/image_aug_default.cc')
-    aug.add_argument('--random-crop', type=int, default=1,
+    aug.add_argument('--random-crop', type=int, default=0,
                      help='if or not randomly crop the image')
-    aug.add_argument('--random-mirror', type=int, default=1,
+    aug.add_argument('--random-mirror', type=int, default=0,
                      help='if or not randomly flip horizontally')
     aug.add_argument('--max-random-h', type=int, default=0,
                      help='max change of hue, whose range is [0, 180]')
@@ -53,8 +53,13 @@ def add_data_aug_args(parser):
                      help='max change of saturation, whose range is [0, 255]')
     aug.add_argument('--max-random-l', type=int, default=0,
                      help='max change of intensity, whose range is [0, 255]')
+    aug.add_argument('--min-random-aspect-ratio', type=float, default=None,
+                     help='min value of aspect ratio, whose value is either None or a positive value.')
     aug.add_argument('--max-random-aspect-ratio', type=float, default=0,
-                     help='max change of aspect ratio, whose range is [0, 1]')
+                     help='max value of aspect ratio. If min_random_aspect_ratio is None, '
+                          'the aspect ratio range is [1-max_random_aspect_ratio, '
+                          '1+max_random_aspect_ratio], otherwise it is '
+                          '[min_random_aspect_ratio, max_random_aspect_ratio].')
     aug.add_argument('--max-random-rotate-angle', type=int, default=0,
                      help='max angle to rotate, whose range is [0, 360]')
     aug.add_argument('--max-random-shear-ratio', type=float, default=0,
@@ -63,16 +68,28 @@ def add_data_aug_args(parser):
                      help='max ratio to scale')
     aug.add_argument('--min-random-scale', type=float, default=1,
                      help='min ratio to scale, should >= img_size/input_shape. otherwise use --pad-size')
+    aug.add_argument('--max-random-area', type=float, default=1,
+                     help='max area to crop in random resized crop, whose range is [0, 1]')
+    aug.add_argument('--min-random-area', type=float, default=1,
+                     help='min area to crop in random resized crop, whose range is [0, 1]')
+    aug.add_argument('--brightness', type=float, default=0,
+                     help='brightness jittering, whose range is [0, 1]')
+    aug.add_argument('--contrast', type=float, default=0,
+                     help='contrast jittering, whose range is [0, 1]')
+    aug.add_argument('--saturation', type=float, default=0,
+                     help='saturation jittering, whose range is [0, 1]')
+    aug.add_argument('--pca-noise', type=float, default=0,
+                     help='pca noise, whose range is [0, 1]')
+

[GitHub] piiswrong closed pull request #11027: Add standard ResNet data augmentation for ImageRecordIter

2018-06-19 Thread GitBox
piiswrong closed pull request #11027: Add standard ResNet data augmentation for 
ImageRecordIter
URL: https://github.com/apache/incubator-mxnet/pull/11027
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/example/image-classification/common/data.py b/example/image-classification/common/data.py
index 05f5ddc4506..bfaadb3ff6b 100755
--- a/example/image-classification/common/data.py
+++ b/example/image-classification/common/data.py
@@ -43,9 +43,9 @@ def add_data_args(parser):
 def add_data_aug_args(parser):
     aug = parser.add_argument_group(
         'Image augmentations', 'implemented in src/io/image_aug_default.cc')
-    aug.add_argument('--random-crop', type=int, default=1,
+    aug.add_argument('--random-crop', type=int, default=0,
                      help='if or not randomly crop the image')
-    aug.add_argument('--random-mirror', type=int, default=1,
+    aug.add_argument('--random-mirror', type=int, default=0,
                      help='if or not randomly flip horizontally')
     aug.add_argument('--max-random-h', type=int, default=0,
                      help='max change of hue, whose range is [0, 180]')
@@ -53,8 +53,13 @@ def add_data_aug_args(parser):
                      help='max change of saturation, whose range is [0, 255]')
     aug.add_argument('--max-random-l', type=int, default=0,
                      help='max change of intensity, whose range is [0, 255]')
+    aug.add_argument('--min-random-aspect-ratio', type=float, default=None,
+                     help='min value of aspect ratio, whose value is either None or a positive value.')
     aug.add_argument('--max-random-aspect-ratio', type=float, default=0,
-                     help='max change of aspect ratio, whose range is [0, 1]')
+                     help='max value of aspect ratio. If min_random_aspect_ratio is None, '
+                          'the aspect ratio range is [1-max_random_aspect_ratio, '
+                          '1+max_random_aspect_ratio], otherwise it is '
+                          '[min_random_aspect_ratio, max_random_aspect_ratio].')
     aug.add_argument('--max-random-rotate-angle', type=int, default=0,
                      help='max angle to rotate, whose range is [0, 360]')
     aug.add_argument('--max-random-shear-ratio', type=float, default=0,
@@ -63,16 +68,28 @@ def add_data_aug_args(parser):
                      help='max ratio to scale')
     aug.add_argument('--min-random-scale', type=float, default=1,
                      help='min ratio to scale, should >= img_size/input_shape. otherwise use --pad-size')
+    aug.add_argument('--max-random-area', type=float, default=1,
+                     help='max area to crop in random resized crop, whose range is [0, 1]')
+    aug.add_argument('--min-random-area', type=float, default=1,
+                     help='min area to crop in random resized crop, whose range is [0, 1]')
+    aug.add_argument('--brightness', type=float, default=0,
+                     help='brightness jittering, whose range is [0, 1]')
+    aug.add_argument('--contrast', type=float, default=0,
+                     help='contrast jittering, whose range is [0, 1]')
+    aug.add_argument('--saturation', type=float, default=0,
+                     help='saturation jittering, whose range is [0, 1]')
+    aug.add_argument('--pca-noise', type=float, default=0,
+                     help='pca noise, whose range is [0, 1]')
+    aug.add_argument('--random-resized-crop', type=int, default=0,
+                     help='whether to use random resized crop')
     return aug
 
-def set_data_aug_level(aug, level):
-    if level >= 1:
-        aug.set_defaults(random_crop=1, random_mirror=1)
-    if level >= 2:
-        aug.set_defaults(max_random_h=36, max_random_s=50, max_random_l=50)
-    if level >= 3:
-        aug.set_defaults(max_random_rotate_angle=10, max_random_shear_ratio=0.1, max_random_aspect_ratio=0.25)
-
+def set_resnet_aug(aug):
+    # standard data augmentation setting for resnet training
+    aug.set_defaults(random_crop=1, random_resized_crop=1)
+    aug.set_defaults(min_random_area=0.08)
+    aug.set_defaults(max_random_aspect_ratio=4./3., min_random_aspect_ratio=3./4.)
+    aug.set_defaults(brightness=0.4, contrast=0.4, saturation=0.4, pca_noise=0.1)
 
 class SyntheticDataIter(DataIter):
     def __init__(self, num_classes, data_shape, max_iter, dtype):
@@ -135,8 +152,16 @@ def get_rec_iter(args, kv=None):
         max_random_scale    = args.max_random_scale,
         pad                 = args.pad_size,
         fill_value          = 127,
+        random_resized_crop = args.random_resized_crop,
         min_random_scale    = args.min_random_scale,
         max_aspect_ratio    = args.max_random_aspect_ratio,
+
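
The diff above is truncated by the archive, but the flags it adds map onto ImageRecordIter keyword arguments. A hedged usage sketch with the standard ResNet settings from set_resnet_aug (the .rec path and batch size are hypothetical, and the iterator-side parameter names are inferred from the diff):

```python
# Hedged sketch: ResNet-style augmentation via the flags added in #11027.
import mxnet as mx

train_iter = mx.io.ImageRecordIter(
    path_imgrec='data/train.rec',   # hypothetical record file
    data_shape=(3, 224, 224),
    batch_size=128,
    shuffle=True,
    random_resized_crop=True,       # random area/aspect-ratio crop, then resize
    min_random_area=0.08,
    max_random_area=1.0,
    min_aspect_ratio=3./4.,
    max_aspect_ratio=4./3.,
    brightness=0.4, contrast=0.4, saturation=0.4,
    pca_noise=0.1,
    rand_mirror=True,
)
```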

[GitHub] lihaofd commented on issue #11333: [WIP] fix issue of test_gru_bidirectional #11219 and add robust code

2018-06-19 Thread GitBox
lihaofd commented on issue #11333: [WIP] fix issue of test_gru_bidirectional 
#11219 and add robust code
URL: https://github.com/apache/incubator-mxnet/pull/11333#issuecomment-398562393
 
 
   @marcoabreu 
   When running test_gru_bidirectional 1000 times in one nosetests run, the Jenkins job failed with "Sending interrupt signal to process", while running it 100 times passes.
   Is this caused by a timeout? If so, how many repetitions would you suggest for this test?
   Thanks!
   
   "test_operator.test_loop_gru_bidirectional ... Sending interrupt signal to 
process
   
   After 10s process did not stop
   or
   
C:\jenkins_slave\workspace\ut-python-cpu@2\pkg_vc14_cpu\python\mxnet\rnn\rnn_cell.py:675:
 UserWarning: NTC layout detected. Consider using TNC for FusedRNNCell for 
faster speed
   
 warnings.warn("NTC layout detected. Consider using "
   
   Sending interrupt signal to process
   
   After 10s process did not stop




[GitHub] anirudh2290 commented on a change in pull request #11332: [MXNET-558] Fix 'AttributeError: '_thread._local' object has no attribute 'value'' on distributed processing applications

2018-06-19 Thread GitBox
anirudh2290 commented on a change in pull request #11332: [MXNET-558] Fix 
'AttributeError: '_thread._local' object has no attribute 'value'' on 
distributed processing applications
URL: https://github.com/apache/incubator-mxnet/pull/11332#discussion_r196591347
 
 

 ##
 File path: python/mxnet/symbol/symbol.py
 ##
 @@ -2451,7 +2451,8 @@ def var(name, attr=None, shape=None, lr_mult=None, wd_mult=None, dtype=None,
     handle = SymbolHandle()
     check_call(_LIB.MXSymbolCreateVariable(c_str(name), ctypes.byref(handle)))
     ret = Symbol(handle)
-    attr = AttrScope._current.value.get(attr)
+    with AttrScope():
 
 Review comment:
   we can avoid a dict copy this way.




[GitHub] ThomasDelteil commented on a change in pull request #11332: [MXNET-558] Fix 'AttributeError: '_thread._local' object has no attribute 'value'' on distributed processing applications

2018-06-19 Thread GitBox
ThomasDelteil commented on a change in pull request #11332: [MXNET-558] Fix 
'AttributeError: '_thread._local' object has no attribute 'value'' on 
distributed processing applications
URL: https://github.com/apache/incubator-mxnet/pull/11332#discussion_r196589704
 
 

 ##
 File path: tests/python/unittest/test_thread_local.py
 ##
 @@ -133,6 +133,19 @@ def f():
     thread.join()
     event.clear()
     assert status[0], "Spawned thread isn't using the correct blockscope namemanager"
+
+def test_createblock():
+    status = [False]
+    def f():
+        net = mx.gluon.nn.Dense(2)
+        net.initialize()
+        net(mx.nd.array([1, 2, 3]))
+        status[0] = True
+
+    thread = threading.Thread(target=f)
+    thread.start()
+    thread.join()
+    assert status[0], "Failed to create a layer within a thread"
 
 Review comment:
   Will do :+1: 




[GitHub] ThomasDelteil commented on a change in pull request #11332: [MXNET-558] Fix 'AttributeError: '_thread._local' object has no attribute 'value'' on distributed processing applications

2018-06-19 Thread GitBox
ThomasDelteil commented on a change in pull request #11332: [MXNET-558] Fix 
'AttributeError: '_thread._local' object has no attribute 'value'' on 
distributed processing applications
URL: https://github.com/apache/incubator-mxnet/pull/11332#discussion_r196588693
 
 

 ##
 File path: python/mxnet/symbol/symbol.py
 ##
 @@ -2451,7 +2451,8 @@ def var(name, attr=None, shape=None, lr_mult=None, wd_mult=None, dtype=None,
     handle = SymbolHandle()
     check_call(_LIB.MXSymbolCreateVariable(c_str(name), ctypes.byref(handle)))
     ret = Symbol(handle)
-    attr = AttrScope._current.value.get(attr)
+    with AttrScope():
 
 Review comment:
   That logic is already included in the context manager.




[GitHub] anirudh2290 commented on a change in pull request #11332: [MXNET-558] Fix 'AttributeError: '_thread._local' object has no attribute 'value'' on distributed processing applications

2018-06-19 Thread GitBox
anirudh2290 commented on a change in pull request #11332: [MXNET-558] Fix 
'AttributeError: '_thread._local' object has no attribute 'value'' on 
distributed processing applications
URL: https://github.com/apache/incubator-mxnet/pull/11332#discussion_r196583018
 
 

 ##
 File path: python/mxnet/symbol/symbol.py
 ##
 @@ -2451,7 +2451,8 @@ def var(name, attr=None, shape=None, lr_mult=None, wd_mult=None, dtype=None,
     handle = SymbolHandle()
     check_call(_LIB.MXSymbolCreateVariable(c_str(name), ctypes.byref(handle)))
     ret = Symbol(handle)
-    attr = AttrScope._current.value.get(attr)
+    with AttrScope():
 
 Review comment:
   We can include the above `not hasattr` logic here too.
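   
   For context, a minimal sketch of the thread-local pitfall behind this PR (names are illustrative, not MXNet's actual internals):
   
   ```python
   # Hedged sketch: attributes set on a threading.local from the main thread
   # are invisible to other threads, which is what raises the AttributeError.
   import threading

   _current = threading.local()
   _current.value = {"scope": "main"}   # initialized on the main thread only

   def worker():
       # A fresh thread has no 'value' attribute yet, so an unguarded
       # `_current.value` would raise AttributeError; hence the hasattr guard.
       print("worker sees:", getattr(_current, "value", None))  # -> None

   t = threading.Thread(target=worker)
   t.start()
   t.join()
   ```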




[GitHub] anirudh2290 commented on a change in pull request #11332: [MXNET-558] Fix 'AttributeError: '_thread._local' object has no attribute 'value'' on distributed processing applications

2018-06-19 Thread GitBox
anirudh2290 commented on a change in pull request #11332: [MXNET-558] Fix 
'AttributeError: '_thread._local' object has no attribute 'value'' on 
distributed processing applications
URL: https://github.com/apache/incubator-mxnet/pull/11332#discussion_r196586618
 
 

 ##
 File path: tests/python/unittest/test_thread_local.py
 ##
 @@ -133,6 +133,19 @@ def f():
     thread.join()
     event.clear()
     assert status[0], "Spawned thread isn't using the correct blockscope namemanager"
+
+def test_createblock():
+    status = [False]
+    def f():
+        net = mx.gluon.nn.Dense(2)
+        net.initialize()
+        net(mx.nd.array([1, 2, 3]))
+        status[0] = True
+
+    thread = threading.Thread(target=f)
+    thread.start()
+    thread.join()
+    assert status[0], "Failed to create a layer within a thread"
 
 Review comment:
   Can you also add tests for running the following functions inside a thread:
   
   ```python
   def g():
       data = mx.sym.Variable('data', attr={'a': 'b'})
   ```
   
   ```python
   def f():
       a = mx.sym.var("a")
       b = mx.sym.var("b")
       a_ = mx.nd.ones((2, 2))
       c_ = a_.copy()
       func1 = (a + b).bind(mx.cpu(), args={'a': a_, 'b': c_})
       func1.forward()[0].wait_to_read()
   ```




[GitHub] jessebrizzi commented on issue #10867: Scala Module API resize is leaking memory on the native size.

2018-06-19 Thread GitBox
jessebrizzi commented on issue #10867: Scala Module API resize is leaking 
memory on the native size. 
URL: 
https://github.com/apache/incubator-mxnet/issues/10867#issuecomment-398542960
 
 
   @lupesko I updated the example code to use the mxnet 1.2.0 package released on Maven, reran the test, and the memory leak is still present.
   
   I have also put together a Docker container that you can use with nvidia-docker to run my example code, with sbt/scala/java/cudnn/cuda all installed, to control for environment differences: https://hub.docker.com/r/jessebrizzi/dl-dev/




[GitHub] marcoabreu commented on a change in pull request #10827: [MXNET-405][WIP] Add 2 new pipelines to the Official CI and run nightly tests.

2018-06-19 Thread GitBox
marcoabreu commented on a change in pull request #10827: [MXNET-405][WIP] Add 2 
new pipelines to the Official CI and run nightly tests. 
URL: https://github.com/apache/incubator-mxnet/pull/10827#discussion_r196537084
 
 

 ##
 File path: docs/install/index.md
 ##
 @@ -84,7 +84,7 @@ $ wget https://bootstrap.pypa.io/get-pip.py && sudo python get-pip.py
 **Step 2** Install MXNet with OpenBLAS acceleration.
 
 ```bash
-$ pip install mxnet
+$ sudo pip install mxnet
 
 Review comment:
   Yes, the nose discussion is a different one; I just wanted it for reference. But I agree - it should not be necessary to run pip as sudo, since we don't expect our users to do it either.
   
   Fully agree.




[GitHub] eric-haibin-lin commented on issue #11299: row_sparse_pull, push row_sparse gradient is too slow, it has 10+ times difference

2018-06-19 Thread GitBox
eric-haibin-lin commented on issue #11299: row_sparse_pull, push row_sparse gradient is too slow, it has 10+ times difference
URL: 
https://github.com/apache/incubator-mxnet/issues/11299#issuecomment-398503457
 
 
   @coldsheephot are you working on the multi-device or the multi-machine case? I plan to extend it for multi-device mode.
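   
   For context, a minimal sketch of the single-machine row_sparse workflow under discussion (key name, shape, and row ids are hypothetical):
   
   ```python
   # Hedged sketch: pull only the rows a batch touches from a row_sparse weight.
   import mxnet as mx

   kv = mx.kv.create('local')
   shape = (100, 8)                                    # hypothetical weight shape
   kv.init('w', mx.nd.ones(shape).tostype('row_sparse'))

   out = mx.nd.sparse.zeros('row_sparse', shape)
   row_ids = mx.nd.array([0, 5, 17], dtype='int64')    # rows used by this batch
   kv.row_sparse_pull('w', out=out, row_ids=row_ids)   # pulls just those rows
   ```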




[GitHub] thomelane commented on issue #11304: Added Learning Rate Finder tutorial

2018-06-19 Thread GitBox
thomelane commented on issue #11304: Added Learning Rate Finder tutorial
URL: https://github.com/apache/incubator-mxnet/pull/11304#issuecomment-398503357
 
 
   @Ishitori @ThomasDelteil could you review when you get a chance, thanks!




[GitHub] eric-haibin-lin commented on issue #11285: Crash while running gluon image-classification.py example with float16

2018-06-19 Thread GitBox
eric-haibin-lin commented on issue #11285: Crash while running gluon 
image-classification.py example with float16
URL: 
https://github.com/apache/incubator-mxnet/issues/11285#issuecomment-398503015
 
 
   @rahul003 




[GitHub] ThomasDelteil edited a comment on issue #11266: [MXNET-514] Add clip_global_norm(row_sparse_grad). Fix row_sparse_param.save(). Fix trainer init_kvstore

2018-06-19 Thread GitBox
ThomasDelteil edited a comment on issue #11266: [MXNET-514] Add 
clip_global_norm(row_sparse_grad). Fix row_sparse_param.save(). Fix trainer 
init_kvstore
URL: https://github.com/apache/incubator-mxnet/pull/11266#issuecomment-398500396
 
 
   @eric-haibin-lin I have this test failing in my build on Windows; is it flaky or an actual failure, do you reckon?
   test_trainer_reset_kv
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-11332/4/pipeline/752




[GitHub] ThomasDelteil commented on issue #11266: [MXNET-514] Add clip_global_norm(row_sparse_grad). Fix row_sparse_param.save(). Fix trainer init_kvstore

2018-06-19 Thread GitBox
ThomasDelteil commented on issue #11266: [MXNET-514] Add 
clip_global_norm(row_sparse_grad). Fix row_sparse_param.save(). Fix trainer 
init_kvstore
URL: https://github.com/apache/incubator-mxnet/pull/11266#issuecomment-398500396
 
 
   @eric-haibin-lin I have this test failing in my build; is it flaky or an actual failure, do you reckon?
   test_trainer_reset_kv
   




[GitHub] itsergiu edited a comment on issue #621: Support for other Device Types, OpenCL AMD GPU

2018-06-19 Thread GitBox
itsergiu edited a comment on issue #621: Support for other Device Types, OpenCL 
AMD GPU
URL: https://github.com/apache/incubator-mxnet/issues/621#issuecomment-398493949
 
 
   Do you already provide an installation kit for AMD GPU RX550?
   Does it work with Windows 10?
   Does it work with Jupyter, Anaconda and Keras on top of Tensorflow?




[GitHub] itsergiu commented on issue #621: Support for other Device Types, OpenCL AMD GPU

2018-06-19 Thread GitBox
itsergiu commented on issue #621: Support for other Device Types, OpenCL AMD GPU
URL: https://github.com/apache/incubator-mxnet/issues/621#issuecomment-398493949
 
 
   Do you already provide an installation kit for AMD GPU RX550?
   Does it work with Windows 10?
   Does it work with Jupyter, Anaconda and Keras on top of Tensorflow?



