[GitHub] liangfu commented on issue #10595: Update mobilenetv2 symbol definition

2018-04-19 Thread GitBox
liangfu commented on issue #10595: Update mobilenetv2 symbol definition
URL: https://github.com/apache/incubator-mxnet/pull/10595#issuecomment-382631637
 
 
   Changes to the class and function names have been made according to the 
suggestions.




[GitHub] zheng-da commented on issue #10599: [MKLDNN Bug] MKLDNN eats lots of memory and then crash down.

2018-04-19 Thread GitBox
zheng-da commented on issue #10599: [MKLDNN Bug] MKLDNN eats lots of memory and 
then crash down.
URL: 
https://github.com/apache/incubator-mxnet/issues/10599#issuecomment-382637204
 
 
   Is the memory used by the temp space? I just learned that there might be 
multiple pieces of temp space; we might need to limit the number of temp spaces.




[GitHub] wentingj commented on a change in pull request #10433: [MXNET-290] MKLDNN support for model quantization

2018-04-19 Thread GitBox
wentingj commented on a change in pull request #10433: [MXNET-290] MKLDNN 
support for model quantization
URL: https://github.com/apache/incubator-mxnet/pull/10433#discussion_r182657549
 
 

 ##
 File path: src/operator/quantization/quantize_graph_pass.cc
 ##
 @@ -198,7 +198,7 @@ Graph QuantizeGraph(Graph &&src) {
       NodePtr mirror_node = mirror_map.at(e.node.get());
       NodeEntry mirror_entry = NodeEntry{
         mirror_node, e.index, e.version};
-      size_t num_outputs = e.node->num_outputs();
+      size_t num_outputs = mirror_node->num_outputs() - 2;
 
 Review comment:
   When MKLDNN is enabled, FP32 pooling has two outputs (one of them is the 
workspace), so num_outputs cannot be taken from the FP32 op node when MKLDNN 
is enabled.




[GitHub] xiadeye commented on issue #1754: Issue in amalgamation for Android

2018-04-19 Thread GitBox
xiadeye commented on issue #1754: Issue in amalgamation for Android
URL: 
https://github.com/apache/incubator-mxnet/issues/1754#issuecomment-382670518
 
 
   What does '${CXX} ${CFLAGS} -fPIC -o $@ -c jni/predictor.cc 
--sysroot=${SYS_ROOT} -I ${INCLUDE}' mean? Is it added in the Makefile? But I 
get the error:
   'Makefile:7: *** missing separator'
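
   For reference: GNU make reports "missing separator" when a rule's recipe 
line begins with spaces instead of a hard tab, which commonly happens when a 
command like the one above is pasted into a Makefile. A minimal sketch of a 
checker for that (a hypothetical helper, not part of the amalgamation scripts):

```python
# Crude heuristic for the usual cause of "Makefile:N: *** missing separator":
# recipe lines under a target must begin with a hard tab, not spaces.
with open("Makefile") as makefile:
    for lineno, line in enumerate(makefile, start=1):
        # Flag indented non-comment lines that use spaces where make expects a tab.
        if line.startswith(" ") and line.strip() and not line.lstrip().startswith("#"):
            print("line {}: starts with spaces; make recipes need a tab".format(lineno))
```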




[GitHub] marcoabreu commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
marcoabreu commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182686000
 
 

 ##
 File path: tests/tutorials/test_sanity_tutorials.py
 ##
 @@ -0,0 +1,81 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import glob
+import os
+import re
+
+# White list of non-downloadable tutorials
 
 Review comment:
   Could you elaborate on what non-downloadable means in this context?




[GitHub] marcoabreu commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
marcoabreu commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182685046
 
 

 ##
 File path: ci/docker/install/ubuntu_tutorials.sh
 ##
 @@ -0,0 +1,26 @@
+#!/bin/bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# build and install are separated so changes to build don't invalidate
+# the whole docker cache for the image
+
+set -ex
+apt-get install graphviz python-opencv
+pip2 install jupyter matplotlib Pillow opencv-python scipy scikit-learn h5py==2.8.0rc1 graphviz
+pip3 install jupyter matplotlib Pillow opencv-python scipy scikit-learn h5py==2.8.0rc1 graphviz
 
 Review comment:
   Most of these dependencies are already installed in the Python script. 
Please make sure to remove the duplicates.




[GitHub] marcoabreu commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
marcoabreu commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182684619
 
 

 ##
 File path: ci/build.py
 ##
 @@ -157,6 +163,11 @@ def script_name() -> str:
 help="Use nvidia docker",
 action='store_true')
 
+parser.add_argument("--shm-size",
+                    help="Size of the shared memory allocated for the container (e.g '1g')",
 
 Review comment:
   Could you elaborate on what this is used for? With whom is the memory being 
shared?




[GitHub] marcoabreu commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
marcoabreu commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182685164
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -349,6 +349,7 @@ sanity_check() {
     tools/license_header.py check
     make cpplint rcpplint jnilint
     make pylint
+    nosetests-3.4 tests/tutorials/test_sanity_tutorials.py
 
 Review comment:
   nice!




[GitHub] marcoabreu commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
marcoabreu commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182686616
 
 

 ##
 File path: tests/tutorials/test_tutorials.py
 ##
 @@ -0,0 +1,187 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+#pylint: disable=no-member, too-many-locals, too-many-branches, no-self-use, broad-except, lost-exception, too-many-nested-blocks, too-few-public-methods, invalid-name
+"""
+This script converts all python tutorials into python script
+and tests whether there is any warning or error.
+After running python script, it will also convert markdown files
+to notebooks to make sure notebook execution has no error.
+"""
+import os
+import warnings
+import imp
+import shutil
+import time
+import argparse
+import traceback
+import nbformat
+from nbconvert.preprocessors import ExecutePreprocessor
+import sys
+
+
+TIME_OUT = 1800
+temp_dir = 'tmp_notebook'
+
+def _test_tutorial_nb(tutorial):
+    """Run tutorial jupyter notebook to catch any execution error.
+
+    Parameters
+    ----------
+    tutorial : str
+        tutorial name in folder/tutorial format
+    """
+
+    tutorial_dir = os.path.join(os.path.dirname(__file__), '..', '..', 'docs', '_build', 'html', 'tutorials')
+    tutorial_path = os.path.join(*([tutorial_dir] + tutorial.split('/')))
+
+    kernel = os.getenv('MXNET_TUTORIAL_TEST_KERNEL', None)
+    no_cache = os.getenv('MXNET_TUTORIAL_TEST_NO_CACHE', False)
 
 Review comment:
   Please document these env-vars
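
   A sketch of the kind of documentation being asked for; the descriptions 
below are inferred from how the quoted code uses the variables, not taken 
from the PR:

```python
import os

# MXNET_TUTORIAL_TEST_KERNEL   : Jupyter kernel used to execute the notebooks
#                                (e.g. 'python2' or 'python3'); when unset,
#                                each notebook's own kernel spec is used.
# MXNET_TUTORIAL_TEST_NO_CACHE : when set, the 'tmp_notebook' working
#                                directory is wiped before the run.
kernel = os.getenv('MXNET_TUTORIAL_TEST_KERNEL', None)
no_cache = os.getenv('MXNET_TUTORIAL_TEST_NO_CACHE', False)
```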




[GitHub] marcoabreu commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
marcoabreu commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182686851
 
 

 ##
 File path: tests/tutorials/test_tutorials.py
 ##
 @@ -0,0 +1,187 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+#pylint: disable=no-member, too-many-locals, too-many-branches, no-self-use, broad-except, lost-exception, too-many-nested-blocks, too-few-public-methods, invalid-name
+"""
+This script converts all python tutorials into python script
+and tests whether there is any warning or error.
+After running python script, it will also convert markdown files
+to notebooks to make sure notebook execution has no error.
+"""
+import os
+import warnings
+import imp
+import shutil
+import time
+import argparse
+import traceback
+import nbformat
+from nbconvert.preprocessors import ExecutePreprocessor
+import sys
+
+
+TIME_OUT = 1800
+temp_dir = 'tmp_notebook'
+
+def _test_tutorial_nb(tutorial):
+    """Run tutorial jupyter notebook to catch any execution error.
+
+    Parameters
+    ----------
+    tutorial : str
+        tutorial name in folder/tutorial format
+    """
+
+    tutorial_dir = os.path.join(os.path.dirname(__file__), '..', '..', 'docs', '_build', 'html', 'tutorials')
+    tutorial_path = os.path.join(*([tutorial_dir] + tutorial.split('/')))
+
+    kernel = os.getenv('MXNET_TUTORIAL_TEST_KERNEL', None)
+    no_cache = os.getenv('MXNET_TUTORIAL_TEST_NO_CACHE', False)
+
+    working_dir = os.path.join(*([temp_dir] + tutorial.split('/')))
+
+    if no_cache:
+        print("Cleaning and setting up temp directory '{}'".format(working_dir))
+        shutil.rmtree(temp_dir, ignore_errors=True)
+
+    errors = []
+    notebook = None
+    if not os.path.isdir(working_dir):
+        os.makedirs(working_dir)
+    try:
+        notebook = nbformat.read(tutorial_path + '.ipynb', as_version=4)
+        if kernel is not None:
+            eprocessor = ExecutePreprocessor(timeout=TIME_OUT, kernel_name=kernel)
+        else:
+            eprocessor = ExecutePreprocessor(timeout=TIME_OUT)
+        nb, stuff = eprocessor.preprocess(notebook, {'metadata': {'path': working_dir}})
+        print(stuff)
+    except Exception as err:
+        err_msg = str(err)
+        errors.append(err_msg)
+    finally:
+        if notebook is not None:
+            output_file = os.path.join(working_dir, "output.txt")
 
 Review comment:
   This filepath makes the tests non-parallelizable. Are you going to introduce 
parallelization at a later point?




[GitHub] marcoabreu commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
marcoabreu commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182686959
 
 

 ##
 File path: tests/tutorials/test_tutorials.py
 ##
 @@ -0,0 +1,187 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+#pylint: disable=no-member, too-many-locals, too-many-branches, no-self-use, broad-except, lost-exception, too-many-nested-blocks, too-few-public-methods, invalid-name
+"""
+This script converts all python tutorials into python script
+and tests whether there is any warning or error.
+After running python script, it will also convert markdown files
+to notebooks to make sure notebook execution has no error.
+"""
+import os
+import warnings
+import imp
+import shutil
+import time
+import argparse
+import traceback
+import nbformat
+from nbconvert.preprocessors import ExecutePreprocessor
+import sys
+
+
+TIME_OUT = 1800
+temp_dir = 'tmp_notebook'
+
+def _test_tutorial_nb(tutorial):
+    """Run tutorial jupyter notebook to catch any execution error.
+
+    Parameters
+    ----------
+    tutorial : str
+        tutorial name in folder/tutorial format
+    """
+
+    tutorial_dir = os.path.join(os.path.dirname(__file__), '..', '..', 'docs', '_build', 'html', 'tutorials')
+    tutorial_path = os.path.join(*([tutorial_dir] + tutorial.split('/')))
+
+    kernel = os.getenv('MXNET_TUTORIAL_TEST_KERNEL', None)
+    no_cache = os.getenv('MXNET_TUTORIAL_TEST_NO_CACHE', False)
+
+    working_dir = os.path.join(*([temp_dir] + tutorial.split('/')))
+
+    if no_cache:
+        print("Cleaning and setting up temp directory '{}'".format(working_dir))
+        shutil.rmtree(temp_dir, ignore_errors=True)
+
+    errors = []
+    notebook = None
+    if not os.path.isdir(working_dir):
+        os.makedirs(working_dir)
+    try:
+        notebook = nbformat.read(tutorial_path + '.ipynb', as_version=4)
+        if kernel is not None:
+            eprocessor = ExecutePreprocessor(timeout=TIME_OUT, kernel_name=kernel)
+        else:
+            eprocessor = ExecutePreprocessor(timeout=TIME_OUT)
+        nb, stuff = eprocessor.preprocess(notebook, {'metadata': {'path': working_dir}})
+        print(stuff)
+    except Exception as err:
+        err_msg = str(err)
+        errors.append(err_msg)
+    finally:
+        if notebook is not None:
+            output_file = os.path.join(working_dir, "output.txt")
+            nbformat.write(notebook, output_file)
+            output_nb = open(output_file, mode='r')
+            for line in output_nb:
+                if "Warning:" in line:
+                    errors.append("Warning:\n"+line)
+        if len(errors) > 0:
+            print('\n'.join(errors))
+            return False
+        return True
+
+
+
+def test_basic_ndarray():
+    assert _test_tutorial_nb('basic/ndarray')
+
+def test_basic_ndarray_indexing():
+    assert _test_tutorial_nb('basic/ndarray_indexing')
+
+def test_basic_symbol():
+    assert _test_tutorial_nb('basic/symbol')
+
+def test_basic_module():
+    assert _test_tutorial_nb('basic/module')
+
+def test_basic_data():
+    assert _test_tutorial_nb('basic/data')
+
+def test_gluon_customop():
+    assert _test_tutorial_nb('gluon/customop')
+
+def test_gluon_data_augmentation():
+    assert _test_tutorial_nb('gluon/data_augmentation')
+
+def test_gluon_datasets():
+    assert True
+    # Investigating flakiness with docker
+    #assert _test_tutorial_nb('gluon/datasets')
 
 Review comment:
   TODO: Enable




[GitHub] marcoabreu commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
marcoabreu commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182686527
 
 

 ##
 File path: tests/tutorials/test_tutorials.py
 ##
 @@ -0,0 +1,187 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+#pylint: disable=no-member, too-many-locals, too-many-branches, no-self-use, broad-except, lost-exception, too-many-nested-blocks, too-few-public-methods, invalid-name
+"""
+This script converts all python tutorials into python script
+and tests whether there is any warning or error.
+After running python script, it will also convert markdown files
+to notebooks to make sure notebook execution has no error.
+"""
+import os
+import warnings
+import imp
+import shutil
+import time
+import argparse
+import traceback
+import nbformat
+from nbconvert.preprocessors import ExecutePreprocessor
+import sys
+
+
+TIME_OUT = 1800
+temp_dir = 'tmp_notebook'
+
+def _test_tutorial_nb(tutorial):
+    """Run tutorial jupyter notebook to catch any execution error.
+
+    Parameters
+    ----------
+    tutorial : str
+        tutorial name in folder/tutorial format
+    """
+
+    tutorial_dir = os.path.join(os.path.dirname(__file__), '..', '..', 'docs', '_build', 'html', 'tutorials')
+    tutorial_path = os.path.join(*([tutorial_dir] + tutorial.split('/')))
+
+    kernel = os.getenv('MXNET_TUTORIAL_TEST_KERNEL', None)
+    no_cache = os.getenv('MXNET_TUTORIAL_TEST_NO_CACHE', False)
+
+    working_dir = os.path.join(*([temp_dir] + tutorial.split('/')))
+
+    if no_cache:
+        print("Cleaning and setting up temp directory '{}'".format(working_dir))
+        shutil.rmtree(temp_dir, ignore_errors=True)
+
+    errors = []
+    notebook = None
+    if not os.path.isdir(working_dir):
+        os.makedirs(working_dir)
+    try:
+        notebook = nbformat.read(tutorial_path + '.ipynb', as_version=4)
 
 Review comment:
   Please make a constant for that version and document it
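
   A minimal sketch of the requested change (the constant name is invented 
here):

```python
import nbformat

# Jupyter notebook JSON schema version the tutorials are read as; 4 is the
# current nbformat schema. (Constant name is hypothetical.)
NOTEBOOK_FORMAT_VERSION = 4

notebook = nbformat.read("example.ipynb", as_version=NOTEBOOK_FORMAT_VERSION)
```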




[GitHub] marcoabreu commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
marcoabreu commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182684832
 
 

 ##
 File path: ci/docker/install/ubuntu_scala.sh
 ##
 @@ -23,9 +23,8 @@
 set -ex
 # install libraries for mxnet's scala package on ubuntu
 apt-get install -y software-properties-common
-add-apt-repository -y ppa:webupd8team/java
 apt-get update
-echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" 
| debconf-set-selections
-apt-get install -y oracle-java8-installer
-apt-get install -y oracle-java8-set-default
-apt-get update && apt-get install -y maven
\ No newline at end of file
+sleep $[ ( $RANDOM % 10 )  + 1 ]s
 
 Review comment:
   Please remove this




[GitHub] marcoabreu commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
marcoabreu commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182685641
 
 

 ##
 File path: docs/mxdoc.py
 ##
 @@ -367,7 +367,8 @@ def add_buttons(app, docname, source):
 # source[i] = '\n'.join(lines)
 
 def setup(app):
-    app.connect("builder-inited", build_mxnet)
+    if os.getenv('MXNET_DOCS_BUILD_MXNET', '1') == '1':
 
 Review comment:
   Please document this env_var somewhere
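
   A sketch of the guarded hook with the requested documentation attached; 
the default and semantics follow the quoted diff and the tutorial CI 
functions in this thread, which export MXNET_DOCS_BUILD_MXNET=0:

```python
import os

def build_mxnet(app):  # stub standing in for mxdoc.py's real builder
    pass

def setup(app):
    # MXNET_DOCS_BUILD_MXNET: '1' (default) builds libmxnet before the docs;
    # '0' skips the build, e.g. when CI has already built MXNet separately.
    if os.getenv('MXNET_DOCS_BUILD_MXNET', '1') == '1':
        app.connect("builder-inited", build_mxnet)
```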




[GitHub] marcoabreu commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
marcoabreu commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182684516
 
 

 ##
 File path: ci/docker/Dockerfile.build.ubuntu_gpu
 ##
 @@ -44,8 +44,14 @@ COPY install/ubuntu_llvm.sh /work/
 RUN /work/ubuntu_llvm.sh
 COPY install/ubuntu_caffe.sh /work/
 RUN /work/ubuntu_caffe.sh
+COPY install/ubuntu_onnx.sh /work/
+RUN /work/ubuntu_onnx.sh
 COPY install/ubuntu_adduser.sh /work/
 
 Review comment:
   Please make sure adduser is the last executed script




[GitHub] marcoabreu commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
marcoabreu commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182685486
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -385,6 +386,28 @@ unittest_ubuntu_python2_gpu() {
     nosetests-2.7 --verbose tests/python/gpu
 }
 
+tutorialtest_ubuntu_python3_gpu() {
+    set -ex
+    cd /work/mxnet/docs
+    export MXNET_DOCS_BUILD_MXNET=0
+    make html
+    export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
+    export PYTHONPATH=/work/mxnet/python/
+    export MXNET_TUTORIAL_TEST_KERNEL=python3
+    cd /work/mxnet/tests/tutorials && nosetests-3.4 test_tutorials.py --nologcapture
+}
+
+tutorialtest_ubuntu_python2_gpu() {
+    set -ex
+    cd /work/mxnet/docs
+    export MXNET_DOCS_BUILD_MXNET=0
+    make html
+    export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
+    export PYTHONPATH=/work/mxnet/python/
+    export MXNET_TUTORIAL_TEST_KERNEL=python2
+    cd /work/mxnet/tests/tutorials && nosetests-3.4 test_tutorials.py --nologcapture
 
 Review comment:
   -> nosetests-2.7
   
   Why no log capture?




[GitHub] marcoabreu commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
marcoabreu commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182686436
 
 

 ##
 File path: tests/tutorials/test_tutorials.py
 ##
 @@ -0,0 +1,187 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+#pylint: disable=no-member, too-many-locals, too-many-branches, no-self-use, broad-except, lost-exception, too-many-nested-blocks, too-few-public-methods, invalid-name
+"""
+This script converts all python tutorials into python script
+and tests whether there is any warning or error.
+After running python script, it will also convert markdown files
+to notebooks to make sure notebook execution has no error.
+"""
+import os
+import warnings
+import imp
+import shutil
+import time
+import argparse
+import traceback
+import nbformat
+from nbconvert.preprocessors import ExecutePreprocessor
+import sys
+
+
+TIME_OUT = 1800
+temp_dir = 'tmp_notebook'
+
+def _test_tutorial_nb(tutorial):
+    """Run tutorial jupyter notebook to catch any execution error.
+
+    Parameters
+    ----------
+    tutorial : str
+        tutorial name in folder/tutorial format
+    """
+
+    tutorial_dir = os.path.join(os.path.dirname(__file__), '..', '..', 'docs', '_build', 'html', 'tutorials')
+    tutorial_path = os.path.join(*([tutorial_dir] + tutorial.split('/')))
+
+    kernel = os.getenv('MXNET_TUTORIAL_TEST_KERNEL', None)
+    no_cache = os.getenv('MXNET_TUTORIAL_TEST_NO_CACHE', False)
+
+    working_dir = os.path.join(*([temp_dir] + tutorial.split('/')))
+
+    if no_cache:
+        print("Cleaning and setting up temp directory '{}'".format(working_dir))
+        shutil.rmtree(temp_dir, ignore_errors=True)
+
+    errors = []
+    notebook = None
+    if not os.path.isdir(working_dir):
+        os.makedirs(working_dir)
+    try:
+        notebook = nbformat.read(tutorial_path + '.ipynb', as_version=4)
+        if kernel is not None:
+            eprocessor = ExecutePreprocessor(timeout=TIME_OUT, kernel_name=kernel)
 
 Review comment:
   Could you document what happens in case of a timeout? We already have a 
timeout mechanism in place in Jenkins.




[GitHub] smartadpole opened a new issue #10611: Reshape network

2018-04-19 Thread GitBox
smartadpole opened a new issue #10611: Reshape network
URL: https://github.com/apache/incubator-mxnet/issues/10611
 
 
   ## Description
   In C/C++ code, how can I reshape a network created by ```MXPredCreate```?
   ```MXPredReshape``` is not exported.
   
   e-mail: smartadp...@163.com
   




[GitHub] nihui commented on issue #10578: [MXNET-326] fix filter layout in DepthwiseConv2dBackwardFilterKernel

2018-04-19 Thread GitBox
nihui commented on issue #10578: [MXNET-326] fix filter layout in 
DepthwiseConv2dBackwardFilterKernel
URL: https://github.com/apache/incubator-mxnet/pull/10578#issuecomment-382216816
 
 
   I think this commit needs to be cherry-picked to the v1.2.0 branch: 
depthwise convolution is widely used, so otherwise a regression bug would 
ship in the coming stable release. Thanks! @marcoabreu  @eric-haibin-lin 




[GitHub] chinakook commented on issue #10611: Reshape network

2018-04-19 Thread GitBox
chinakook commented on issue #10611: Reshape network
URL: 
https://github.com/apache/incubator-mxnet/issues/10611#issuecomment-382707653
 
 
   https://github.com/apache/incubator-mxnet/pull/10612




[GitHub] chinakook opened a new pull request #10612: MXPredReshape support Windows

2018-04-19 Thread GitBox
chinakook opened a new pull request #10612: MXPredReshape support Windows
URL: https://github.com/apache/incubator-mxnet/pull/10612
 
 
   ## Description ##
   Add the MXNET_DLL prefix to the MXPredReshape API to support Windows.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] xinyu-intel opened a new pull request #10613: Add Windows MKLDNN Building Instruction

2018-04-19 Thread GitBox
xinyu-intel opened a new pull request #10613: Add Windows MKLDNN Building 
Instruction
URL: https://github.com/apache/incubator-mxnet/pull/10613
 
 
   ## Description ##
   These are raw instructions for Windows users building MXNet with MKLDNN from 
source. The process is quite tedious and still needs some optimization.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   @pengzhao-intel @zheng-da 
   




[GitHub] KaiserSozo commented on issue #10532: NDArray failed to allocate CPU memory

2018-04-19 Thread GitBox
KaiserSozo commented on issue #10532: NDArray failed to allocate CPU memory
URL: 
https://github.com/apache/incubator-mxnet/issues/10532#issuecomment-382761026
 
 
   OK, for the first question and the topic question I have the answer:
   I need to use weightsData.wait_to_read() after 
weights.set_data(weights.data() + output).
   
   But the performance issue is still valid.
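
   A minimal runnable sketch of that synchronization point (the layer shape 
and the update are invented for illustration):

```python
import mxnet as mx
from mxnet.gluon import nn

layer = nn.Dense(4, in_units=4)
layer.initialize()
update = mx.nd.ones(layer.weight.shape)

# set_data only enqueues the write on MXNet's asynchronous engine...
layer.weight.set_data(layer.weight.data() + update)
# ...so block until it has completed before timing or reusing the weights.
layer.weight.data().wait_to_read()
```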




[GitHub] spidyDev commented on issue #10611: Reshape network

2018-04-19 Thread GitBox
spidyDev commented on issue #10611: Reshape network
URL: 
https://github.com/apache/incubator-mxnet/issues/10611#issuecomment-382763513
 
 
   @nswamy  Please tag this as: Question, Feature




[GitHub] spidyDev commented on issue #10610: memory increase when use adam and rmsprop.

2018-04-19 Thread GitBox
spidyDev commented on issue #10610: memory increase when use adam and rmsprop.
URL: 
https://github.com/apache/incubator-mxnet/issues/10610#issuecomment-382765582
 
 
   @li-haoran  Please do post your question on https://discuss.mxnet.io/ 
   
   @nswamy : Please tag as: Question, Memory




[GitHub] spidyDev commented on issue #10611: Reshape network

2018-04-19 Thread GitBox
spidyDev commented on issue #10611: Reshape network
URL: 
https://github.com/apache/incubator-mxnet/issues/10611#issuecomment-382763513
 
 
   @nswamy  Please tag this as: Question, Feature, C++




[GitHub] spidyDev commented on issue #10609: Gluon code fails in the normal mode but succeeds in the hybrid mode.

2018-04-19 Thread GitBox
spidyDev commented on issue #10609: Gluon code fails in the normal mode but 
succeeds in the hybrid mode.
URL: 
https://github.com/apache/incubator-mxnet/issues/10609#issuecomment-382770003
 
 
   @nswamy : Please tag as : question, gluon




[incubator-mxnet] branch master updated: [MXNET-329] support SparseEmbedding with dense weight (#10585)

2018-04-19 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 3062122  [MXNET-329] support SparseEmbedding with dense weight (#10585)
3062122 is described below

commit 306212289888562617bf1dae1199695daea2b054
Author: Haibin Lin 
AuthorDate: Thu Apr 19 08:04:40 2018 -0700

[MXNET-329] support SparseEmbedding with dense weight (#10585)

* add sparseembedding(dense_weight)

* update test

* Update test_sparse_operator.py
---
 src/operator/tensor/indexing_op.cc            |  8 +++-----
 src/operator/tensor/indexing_op.h             | 12 +++++++++---
 tests/python/unittest/test_sparse_operator.py | 20 ++++++++++++------
 3 files changed, 26 insertions(+), 14 deletions(-)

diff --git a/src/operator/tensor/indexing_op.cc b/src/operator/tensor/indexing_op.cc
index bb65419..6f0f468 100644
--- a/src/operator/tensor/indexing_op.cc
+++ b/src/operator/tensor/indexing_op.cc
@@ -263,8 +263,7 @@ All the input values should be integers in the range [0, input_dim).
 If the input_dim is ip0 and output_dim is op0, then shape of the embedding weight matrix must be
 (ip0, op0).
 
-The storage type of weight must be `row_sparse`, and the gradient of the weight will be of
-`row_sparse` storage type, too.
+The storage type of the gradient will be `row_sparse`.
 
 .. Note::
 
@@ -272,9 +271,8 @@ The storage type of weight must be `row_sparse`, and the gradient of the weight
 The operator is available on both CPU and GPU.
 When `deterministic` is set to `True`, the accumulation of gradients follows a
 deterministic order if a feature appears multiple times in the input. However, the
-accumulation is usually slower when the order is enforced.
-When the operator is used in recurrent neural network models on the GPU,
-the recommended value for `deterministic` is `True`.
+accumulation is usually slower when the order is enforced on GPU.
+When the operator is used on the GPU, the recommended value for `deterministic` is `True`.
 
 Examples::
 
diff --git a/src/operator/tensor/indexing_op.h b/src/operator/tensor/indexing_op.h
index 2d17798..0f65066 100644
--- a/src/operator/tensor/indexing_op.h
+++ b/src/operator/tensor/indexing_op.h
@@ -21,7 +21,7 @@
  * Copyright (c) 2017 by Contributors
  * \file indexing_op.h
  * \brief
- * \author Bing Xu, Siyi Li, Chi Zhang
+ * \author Bing Xu, Siyi Li, Chi Zhang, Haibin Lin
 */
 #ifndef MXNET_OPERATOR_TENSOR_INDEXING_OP_H_
 #define MXNET_OPERATOR_TENSOR_INDEXING_OP_H_
@@ -209,8 +209,8 @@ inline bool SparseEmbeddingOpForwardStorageType(const nnvm::NodeAttrs& attrs,
   int& out_stype = out_attrs->at(embedding::kOut);
   bool dispatched = false;
   if (!dispatched && data_stype == kDefaultStorage &&
-      weight_stype == kRowSparseStorage) {
-    // dns, rsp -> dns
+      (weight_stype == kRowSparseStorage || weight_stype == kDefaultStorage)) {
+    // dns, rsp/dns -> dns
     dispatched = storage_type_assign(&out_stype, kDefaultStorage,
                                      dispatch_mode, DispatchMode::kFComputeEx);
   }
@@ -423,7 +423,13 @@ void SparseEmbeddingOpForwardEx(const nnvm::NodeAttrs& attrs,
   const auto out_stype = out.storage_type();
   if (data_stype == kDefaultStorage && weight_stype == kRowSparseStorage &&
       out_stype == kDefaultStorage) {
+    // dns, rsp -> dns
     SparseEmbeddingOpForwardRspImpl<xpu>(ctx, data.data(), weight, req[0], out.data());
+  } else if (data_stype == kDefaultStorage && weight_stype == kDefaultStorage &&
+             out_stype == kDefaultStorage) {
+    // dns, dns -> dns
+    EmbeddingOpForwardDnsImpl<xpu>(ctx.get_stream<xpu>(), data.data(), weight.data(),
+                                   req[0], out.data());
   } else {
     LogUnimplementedOp(attrs, ctx, inputs, req, outputs);
   }
diff --git a/tests/python/unittest/test_sparse_operator.py b/tests/python/unittest/test_sparse_operator.py
index 3479486..31f2e49 100644
--- a/tests/python/unittest/test_sparse_operator.py
+++ b/tests/python/unittest/test_sparse_operator.py
@@ -1638,10 +1638,10 @@ def test_sparse_elementwise_sum():
 @with_seed()
 def test_sparse_embedding():
     ''' test sparse embedding operator '''
-    def check_sparse_embedding(in_dim, out_dim, batch, densities, deterministic):
+    def check_sparse_embedding(in_dim, out_dim, batch, densities, deterministic, weight_stype):
         # init executor
         data = mx.sym.Variable("data")
-        weight = mx.sym.Variable("embed_weight", stype='row_sparse')
+        weight = mx.sym.Variable("embed_weight", stype=weight_stype)
         embed = mx.sym.contrib.SparseEmbedding(data=data, weight=weight, input_dim=in_dim,
                                                output_dim=out_dim, deterministic=deterministic,
                                                name="embed"

[GitHub] eric-haibin-lin closed pull request #10585: [MXNET-329] support SparseEmbedding with dense weight

2018-04-19 Thread GitBox
eric-haibin-lin closed pull request #10585: [MXNET-329] support SparseEmbedding 
with dense weight
URL: https://github.com/apache/incubator-mxnet/pull/10585
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/tensor/indexing_op.cc b/src/operator/tensor/indexing_op.cc
index bb65419a79c..6f0f468998b 100644
--- a/src/operator/tensor/indexing_op.cc
+++ b/src/operator/tensor/indexing_op.cc
@@ -263,8 +263,7 @@ All the input values should be integers in the range [0, input_dim).
 If the input_dim is ip0 and output_dim is op0, then shape of the embedding weight matrix must be
 (ip0, op0).
 
-The storage type of weight must be `row_sparse`, and the gradient of the weight will be of
-`row_sparse` storage type, too.
+The storage type of the gradient will be `row_sparse`.
 
 .. Note::
 
@@ -272,9 +271,8 @@ The storage type of weight must be `row_sparse`, and the gradient of the weight
 The operator is available on both CPU and GPU.
 When `deterministic` is set to `True`, the accumulation of gradients follows a
 deterministic order if a feature appears multiple times in the input. However, the
-accumulation is usually slower when the order is enforced.
-When the operator is used in recurrent neural network models on the GPU,
-the recommended value for `deterministic` is `True`.
+accumulation is usually slower when the order is enforced on GPU.
+When the operator is used on the GPU, the recommended value for `deterministic` is `True`.
 
 Examples::
 
diff --git a/src/operator/tensor/indexing_op.h b/src/operator/tensor/indexing_op.h
index 2d17798c346..0f6506640a4 100644
--- a/src/operator/tensor/indexing_op.h
+++ b/src/operator/tensor/indexing_op.h
@@ -21,7 +21,7 @@
  * Copyright (c) 2017 by Contributors
  * \file indexing_op.h
  * \brief
- * \author Bing Xu, Siyi Li, Chi Zhang
+ * \author Bing Xu, Siyi Li, Chi Zhang, Haibin Lin
 */
 #ifndef MXNET_OPERATOR_TENSOR_INDEXING_OP_H_
 #define MXNET_OPERATOR_TENSOR_INDEXING_OP_H_
@@ -209,8 +209,8 @@ inline bool SparseEmbeddingOpForwardStorageType(const nnvm::NodeAttrs& attrs,
   int& out_stype = out_attrs->at(embedding::kOut);
   bool dispatched = false;
   if (!dispatched && data_stype == kDefaultStorage &&
-      weight_stype == kRowSparseStorage) {
-    // dns, rsp -> dns
+      (weight_stype == kRowSparseStorage || weight_stype == kDefaultStorage)) {
+    // dns, rsp/dns -> dns
     dispatched = storage_type_assign(&out_stype, kDefaultStorage,
                                      dispatch_mode, DispatchMode::kFComputeEx);
   }
@@ -423,7 +423,13 @@ void SparseEmbeddingOpForwardEx(const nnvm::NodeAttrs& attrs,
   const auto out_stype = out.storage_type();
   if (data_stype == kDefaultStorage && weight_stype == kRowSparseStorage &&
       out_stype == kDefaultStorage) {
+    // dns, rsp -> dns
     SparseEmbeddingOpForwardRspImpl<xpu>(ctx, data.data(), weight, req[0], out.data());
+  } else if (data_stype == kDefaultStorage && weight_stype == kDefaultStorage &&
+             out_stype == kDefaultStorage) {
+    // dns, dns -> dns
+    EmbeddingOpForwardDnsImpl<xpu>(ctx.get_stream<xpu>(), data.data(), weight.data(),
+                                   req[0], out.data());
   } else {
     LogUnimplementedOp(attrs, ctx, inputs, req, outputs);
   }
diff --git a/tests/python/unittest/test_sparse_operator.py b/tests/python/unittest/test_sparse_operator.py
index 34794866546..31f2e494f51 100644
--- a/tests/python/unittest/test_sparse_operator.py
+++ b/tests/python/unittest/test_sparse_operator.py
@@ -1638,10 +1638,10 @@ def check_sparse_elementwise_sum_with_shape(stype, shape, n):
 @with_seed()
 def test_sparse_embedding():
     ''' test sparse embedding operator '''
-    def check_sparse_embedding(in_dim, out_dim, batch, densities, deterministic):
+    def check_sparse_embedding(in_dim, out_dim, batch, densities, deterministic, weight_stype):
         # init executor
         data = mx.sym.Variable("data")
-        weight = mx.sym.Variable("embed_weight", stype='row_sparse')
+        weight = mx.sym.Variable("embed_weight", stype=weight_stype)
         embed = mx.sym.contrib.SparseEmbedding(data=data, weight=weight, input_dim=in_dim,
                                                output_dim=out_dim, deterministic=deterministic,
                                                name="embed")
@@ -1662,21 +1662,29 @@ def check_sparse_embedding(in_dim, out_dim, batch, densities, deterministic):
         weight = arg_map["embed_weight"]
         for density in densities:
             # update weight based on density
-            weight[:] = rand_ndarray(weight.shape, 'row_sparse', density=density)
+            weight[:] = rand_ndarray(weight.shape, weight_stype, density=density)
             # check fo
[GitHub] spidyDev commented on issue #10602: mxnet.model.server custom Service model.get_outputs() returns only one result

2018-04-19 Thread GitBox
spidyDev commented on issue #10602:  mxnet.model.server custom Service 
model.get_outputs() returns only one result
URL: 
https://github.com/apache/incubator-mxnet/issues/10602#issuecomment-382773235
 
 
   @crazyleg  Please post your question on 
https://github.com/awslabs/mxnet-model-server/issues
   to get a better response for this issue.
   
   @reminisce  Please tag : Question




[GitHub] ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182782147
 
 

 ##
 File path: ci/docker/Dockerfile.build.ubuntu_gpu
 ##
 @@ -44,8 +44,14 @@ COPY install/ubuntu_llvm.sh /work/
 RUN /work/ubuntu_llvm.sh
 COPY install/ubuntu_caffe.sh /work/
 RUN /work/ubuntu_caffe.sh
+COPY install/ubuntu_onnx.sh /work/
+RUN /work/ubuntu_onnx.sh
 COPY install/ubuntu_adduser.sh /work/
 
 Review comment:
   will update




[GitHub] spidyDev commented on issue #10601: load parameters error with the same code exported by HybridBlock

2018-04-19 Thread GitBox
spidyDev commented on issue #10601: load parameters error  with the same code 
exported by HybridBlock
URL: 
https://github.com/apache/incubator-mxnet/issues/10601#issuecomment-382774430
 
 
   @reminisce : Please label this: gluon, bug




[GitHub] ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182783772
 
 

 ##
 File path: ci/build.py
 ##
 @@ -157,6 +163,11 @@ def script_name() -> str:
 help="Use nvidia docker",
 action='store_true')
 
+parser.add_argument("--shm-size",
+                    help="Size of the shared memory allocated for the container (e.g '1g')",
 
 Review comment:
   That's the size of /dev/shm inside the container, which is used for 
inter-process communication by `DataLoader`s. By default it was only 50m, 
which causes hang-ups when used for real-world examples like in the 
tutorials, with 32 workers. 
https://docs.docker.com/engine/reference/commandline/run/
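
   A sketch of how such a flag is typically forwarded to `docker run` (the 
function and defaults here are illustrative, not taken from ci/build.py):

```python
import argparse
import subprocess

parser = argparse.ArgumentParser()
parser.add_argument("--shm-size", default="1g",
                    help="Size of /dev/shm inside the container (e.g. '1g')")
args = parser.parse_args()

# docker's --shm-size enlarges /dev/shm, which multi-worker DataLoaders use
# for inter-process communication; the small default is easily exhausted.
subprocess.check_call(["docker", "run", "--shm-size", args.shm_size,
                       "ubuntu:16.04", "true"])
```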
   
   




[GitHub] ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182784149
 
 

 ##
 File path: ci/docker/install/ubuntu_scala.sh
 ##
 @@ -23,9 +23,8 @@
 set -ex
 # install libraries for mxnet's scala package on ubuntu
 apt-get install -y software-properties-common
-add-apt-repository -y ppa:webupd8team/java
 apt-get update
-echo "oracle-java8-installer shared/accepted-oracle-license-v1-1 select true" 
| debconf-set-selections
-apt-get install -y oracle-java8-installer
-apt-get install -y oracle-java8-set-default
-apt-get update && apt-get install -y maven
\ No newline at end of file
+sleep $[ ( $RANDOM % 10 )  + 1 ]s
 
 Review comment:
   Happy to do it; however, I noticed a lot fewer failed image builds after 
adding that. I wonder whether the fact that we send 20 requests at the same 
time for the same package might be a problem.




[GitHub] spidyDev commented on issue #10600: Gluon Interpolation Layer

2018-04-19 Thread GitBox
spidyDev commented on issue #10600: Gluon Interpolation Layer
URL: 
https://github.com/apache/incubator-mxnet/issues/10600#issuecomment-382775290
 
 
   @reminisce : Please label as : Gluon, question, operator




[GitHub] ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182784310
 
 

 ##
 File path: ci/docker/install/ubuntu_tutorials.sh
 ##
 @@ -0,0 +1,26 @@
+#!/bin/bash
+
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# build and install are separated so changes to build don't invalidate
+# the whole docker cache for the image
+
+set -ex
+apt-get install graphviz python-opencv
+pip2 install jupyter matplotlib Pillow opencv-python scipy scikit-learn h5py==2.8.0rc1 graphviz
+pip3 install jupyter matplotlib Pillow opencv-python scipy scikit-learn h5py==2.8.0rc1 graphviz
 
 Review comment:
   ok will do




[GitHub] ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182784791
 
 

 ##
 File path: tests/tutorials/test_tutorials.py
 ##
 @@ -0,0 +1,187 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+#pylint: disable=no-member, too-many-locals, too-many-branches, no-self-use, 
broad-except, lost-exception, too-many-nested-blocks, too-few-public-methods, 
invalid-name
+"""
+This script converts all python tutorials into python script
+and tests whether there is any warning or error.
+After running python script, it will also convert markdown files
+to notebooks to make sure notebook execution has no error.
+"""
+import os
+import warnings
+import imp
+import shutil
+import time
+import argparse
+import traceback
+import nbformat
+from nbconvert.preprocessors import ExecutePreprocessor
+import sys
+
+
+TIME_OUT = 1800
+temp_dir = 'tmp_notebook'
+
+def _test_tutorial_nb(tutorial):
+"""Run tutorial jupyter notebook to catch any execution error.
+
+Parameters
+--
+tutorial : str
+tutorial name in folder/tutorial format
+"""
+
+tutorial_dir = os.path.join(os.path.dirname(__file__), '..', '..', 'docs', 
'_build', 'html', 'tutorials')
+tutorial_path = os.path.join(*([tutorial_dir] + tutorial.split('/')))
+
+kernel = os.getenv('MXNET_TUTORIAL_TEST_KERNEL', None)
+no_cache = os.getenv('MXNET_TUTORIAL_TEST_NO_CACHE', False)
+
+working_dir = os.path.join(*([temp_dir] + tutorial.split('/')))
+
+if no_cache:
+print("Cleaning and setting up temp directory 
'{}'".format(working_dir))
+shutil.rmtree(temp_dir, ignore_errors=True)
+
+errors = []
+notebook = None
+if not os.path.isdir(working_dir):
+os.makedirs(working_dir)
+try:
+notebook = nbformat.read(tutorial_path + '.ipynb', as_version=4)
+if kernel is not None:
+eprocessor = ExecutePreprocessor(timeout=TIME_OUT, 
kernel_name=kernel)
 
 Review comment:
   That is the timeout for the execution of a single notebook (nbconvert 
applies it per cell). Good point; I am not sure whether that would fail the 
test or just move on.
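   
   For what it's worth, a minimal sketch of that behavior (the notebook path 
here is hypothetical): when a cell exceeds the timeout, 
`ExecutePreprocessor.preprocess` raises an exception, so the `try/except` in 
`_test_tutorial_nb` records it as an error and the test fails rather than 
silently moving on.

```python
import nbformat
from nbconvert.preprocessors import ExecutePreprocessor

# Hypothetical notebook, read the same way as in _test_tutorial_nb.
notebook = nbformat.read('example.ipynb', as_version=4)
# A deliberately small per-cell timeout, just for illustration.
eprocessor = ExecutePreprocessor(timeout=5)

try:
    eprocessor.preprocess(notebook, {'metadata': {'path': '.'}})
except Exception as err:  # nbconvert raises a TimeoutError for a stuck cell
    print("notebook failed: %s" % err)  # the test appends this to `errors`
```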


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182784955
 
 

 ##
 File path: tests/tutorials/test_tutorials.py
 ##
 @@ -0,0 +1,187 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+#pylint: disable=no-member, too-many-locals, too-many-branches, no-self-use, 
broad-except, lost-exception, too-many-nested-blocks, too-few-public-methods, 
invalid-name
+"""
+This script converts all python tutorials into python script
+and tests whether there is any warning or error.
+After running python script, it will also convert markdown files
+to notebooks to make sure notebook execution has no error.
+"""
+import os
+import warnings
+import imp
+import shutil
+import time
+import argparse
+import traceback
+import nbformat
+from nbconvert.preprocessors import ExecutePreprocessor
+import sys
+
+
+TIME_OUT = 1800
+temp_dir = 'tmp_notebook'
+
+def _test_tutorial_nb(tutorial):
+"""Run tutorial jupyter notebook to catch any execution error.
+
+Parameters
+--
+tutorial : str
+tutorial name in folder/tutorial format
+"""
+
+tutorial_dir = os.path.join(os.path.dirname(__file__), '..', '..', 'docs', 
'_build', 'html', 'tutorials')
+tutorial_path = os.path.join(*([tutorial_dir] + tutorial.split('/')))
+
+kernel = os.getenv('MXNET_TUTORIAL_TEST_KERNEL', None)
+no_cache = os.getenv('MXNET_TUTORIAL_TEST_NO_CACHE', False)
 
 Review comment:
   will do


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182784893
 
 

 ##
 File path: tests/tutorials/test_tutorials.py
 ##
 @@ -0,0 +1,187 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+#pylint: disable=no-member, too-many-locals, too-many-branches, no-self-use, 
broad-except, lost-exception, too-many-nested-blocks, too-few-public-methods, 
invalid-name
+"""
+This script converts all python tutorials into python script
+and tests whether there is any warning or error.
+After running python script, it will also convert markdown files
+to notebooks to make sure notebook execution has no error.
+"""
+import os
+import warnings
+import imp
+import shutil
+import time
+import argparse
+import traceback
+import nbformat
+from nbconvert.preprocessors import ExecutePreprocessor
+import sys
+
+
+TIME_OUT = 1800
+temp_dir = 'tmp_notebook'
+
+def _test_tutorial_nb(tutorial):
+"""Run tutorial jupyter notebook to catch any execution error.
+
+Parameters
+--
+tutorial : str
+tutorial name in folder/tutorial format
+"""
+
+tutorial_dir = os.path.join(os.path.dirname(__file__), '..', '..', 'docs', 
'_build', 'html', 'tutorials')
+tutorial_path = os.path.join(*([tutorial_dir] + tutorial.split('/')))
+
+kernel = os.getenv('MXNET_TUTORIAL_TEST_KERNEL', None)
+no_cache = os.getenv('MXNET_TUTORIAL_TEST_NO_CACHE', False)
+
+working_dir = os.path.join(*([temp_dir] + tutorial.split('/')))
+
+if no_cache:
+print("Cleaning and setting up temp directory 
'{}'".format(working_dir))
+shutil.rmtree(temp_dir, ignore_errors=True)
+
+errors = []
+notebook = None
+if not os.path.isdir(working_dir):
+os.makedirs(working_dir)
+try:
+notebook = nbformat.read(tutorial_path + '.ipynb', as_version=4)
 
 Review comment:
   will do


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] spidyDev commented on issue #10599: [MKLDNN Bug] MKLDNN eats lots of memory and then crash down.

2018-04-19 Thread GitBox
spidyDev commented on issue #10599: [MKLDNN Bug] MKLDNN eats lots of memory and 
then crash down.
URL: 
https://github.com/apache/incubator-mxnet/issues/10599#issuecomment-382776715
 
 
   @reminisce : Please label as : MKL, bug. We might need a label for MKLDNN.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182786851
 
 

 ##
 File path: tests/tutorials/test_tutorials.py
 ##
 @@ -0,0 +1,187 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+#pylint: disable=no-member, too-many-locals, too-many-branches, no-self-use, 
broad-except, lost-exception, too-many-nested-blocks, too-few-public-methods, 
invalid-name
+"""
+This script converts all python tutorials into python script
+and tests whether there is any warning or error.
+After running python script, it will also convert markdown files
+to notebooks to make sure notebook execution has no error.
+"""
+import os
+import warnings
+import imp
+import shutil
+import time
+import argparse
+import traceback
+import nbformat
+from nbconvert.preprocessors import ExecutePreprocessor
+import sys
+
+
+TIME_OUT = 1800
+temp_dir = 'tmp_notebook'
+
+def _test_tutorial_nb(tutorial):
+"""Run tutorial jupyter notebook to catch any execution error.
+
+Parameters
+--
+tutorial : str
+tutorial name in folder/tutorial format
+"""
+
+tutorial_dir = os.path.join(os.path.dirname(__file__), '..', '..', 'docs', 
'_build', 'html', 'tutorials')
+tutorial_path = os.path.join(*([tutorial_dir] + tutorial.split('/')))
+
+kernel = os.getenv('MXNET_TUTORIAL_TEST_KERNEL', None)
+no_cache = os.getenv('MXNET_TUTORIAL_TEST_NO_CACHE', False)
+
+working_dir = os.path.join(*([temp_dir] + tutorial.split('/')))
+
+if no_cache:
+print("Cleaning and setting up temp directory 
'{}'".format(working_dir))
+shutil.rmtree(temp_dir, ignore_errors=True)
+
+errors = []
+notebook = None
+if not os.path.isdir(working_dir):
+os.makedirs(working_dir)
+try:
+notebook = nbformat.read(tutorial_path + '.ipynb', as_version=4)
+if kernel is not None:
+eprocessor = ExecutePreprocessor(timeout=TIME_OUT, 
kernel_name=kernel)
+else:
+eprocessor = ExecutePreprocessor(timeout=TIME_OUT)
+nb, stuff = eprocessor.preprocess(notebook, {'metadata': {'path': 
working_dir}})
+print(stuff)
+except Exception as err:
+err_msg = str(err)
+errors.append(err_msg)
+finally:
+if notebook is not None:
+output_file = os.path.join(working_dir, "output.txt")
 
 Review comment:
   The file path is unique per tutorial, so it actually supports 
parallelization pretty well.
   Parallelization with nose is much simpler than we discussed last time; 
just add:
   `--processes=8 --process-timeout=1800 (--process-restartworker)`
   It does work with this test, reducing the time needed to run them by 60% 
with 8 workers. However, I am not sure how mature this nosetest plugin is. I 
think the fact that these tests spawn an extra process confuses nosetest: I 
have witnessed some nosetest workers crash, leaving orphaned Python processes 
that use up GPU memory, and on top of that the particular test case was still 
considered successful.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] spidyDev commented on issue #10598: How to assign weight for each class in the fine-tuning example?

2018-04-19 Thread GitBox
spidyDev commented on issue #10598: How to assign weight for each class in the 
fine-tuning example?
URL: 
https://github.com/apache/incubator-mxnet/issues/10598#issuecomment-382778049
 
 
   @lixiangchun : Please post your question on discuss.mxnet.io.
   
   @reminisce : Please label as : Question, HowTo
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182787563
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -349,6 +349,7 @@ sanity_check() {
 tools/license_header.py check
 make cpplint rcpplint jnilint
 make pylint
+nosetests-3.4 tests/tutorials/test_sanity_tutorials.py
 
 Review comment:
   Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] spidyDev commented on issue #10588: Error while trying to run LeakyRelu layer with 1-d input

2018-04-19 Thread GitBox
spidyDev commented on issue #10588: Error while trying to run LeakyRelu layer 
with 1-d input
URL: 
https://github.com/apache/incubator-mxnet/issues/10588#issuecomment-382778557
 
 
   @nswamy  Please label as : Gluon, Operator, Question


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] KaiserSozo commented on issue #10532: NDArray failed to allocate CPU memory

2018-04-19 Thread GitBox
KaiserSozo commented on issue #10532: NDArray failed to allocate CPU memory
URL: 
https://github.com/apache/incubator-mxnet/issues/10532#issuecomment-382778869
 
 
   Addition: I've measured the time again with .wait_to_read(); calculation 
time was 330 seconds, but copying time became 2400 seconds!
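   
   For context, a minimal timing sketch (the shapes are illustrative, not 
taken from this issue): NDArray operations are asynchronous, so without 
`wait_to_read()` a timer around the computation only measures how long it 
takes to enqueue the work, and the real cost then shows up in whatever blocks 
next, such as the copy.

```python
import time
import mxnet as mx

a = mx.nd.random.uniform(shape=(4096, 4096))
b = mx.nd.random.uniform(shape=(4096, 4096))

start = time.time()
c = mx.nd.dot(a, b)
c.wait_to_read()  # block until the result is actually computed
print("compute: %.3f s" % (time.time() - start))

start = time.time()
c_np = c.asnumpy()  # copy the result out of MXNet into NumPy
print("copy: %.3f s" % (time.time() - start))
```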


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] David-Levinthal commented on issue #10505: Profiler profiler shoudl collect call durations not just timestamps

2018-04-19 Thread GitBox
David-Levinthal commented on issue #10505: Profiler  profiler shoudl collect 
call durations not just timestamps
URL: 
https://github.com/apache/incubator-mxnet/issues/10505#issuecomment-382778960
 
 
   You mean modify the title to start with Profiler (upper case P)?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] spidyDev commented on issue #10586: Cannot build ubuntu_gpu CI docker image (java8 sdk missing)

2018-04-19 Thread GitBox
spidyDev commented on issue #10586: Cannot build ubuntu_gpu CI docker image 
(java8 sdk missing)
URL: 
https://github.com/apache/incubator-mxnet/issues/10586#issuecomment-382779543
 
 
   @ThomasDelteil  @marcoabreu  Could we label this issue appropriately? 
Thanks. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] TaoLv commented on issue #10580: inference results unstable in mxnet_mkl-1.2.0b20180416

2018-04-19 Thread GitBox
TaoLv commented on issue #10580: inference results unstable in 
mxnet_mkl-1.2.0b20180416 
URL: 
https://github.com/apache/incubator-mxnet/issues/10580#issuecomment-382780304
 
 
   Update:
   @dwSun, could you help try this branch to see if this issue is still there?
   https://github.com/TaoLv/incubator-mxnet/tree/fix-SetMKLMem
   
   Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182789983
 
 

 ##
 File path: docs/mxdoc.py
 ##
 @@ -367,7 +367,8 @@ def add_buttons(app, docname, source):
 # source[i] = '\n'.join(lines)
 
 def setup(app):
-app.connect("builder-inited", build_mxnet)
+if os.getenv('MXNET_DOCS_BUILD_MXNET', '1') == '1':
 
 Review comment:
   will do


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182789576
 
 

 ##
 File path: ci/docker/runtime_functions.sh
 ##
 @@ -385,6 +386,28 @@ unittest_ubuntu_python2_gpu() {
 nosetests-2.7 --verbose tests/python/gpu
 }
 
+tutorialtest_ubuntu_python3_gpu() {
+set -ex
+cd /work/mxnet/docs
+export MXNET_DOCS_BUILD_MXNET=0
+make html
+export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
+export PYTHONPATH=/work/mxnet/python/
+export MXNET_TUTORIAL_TEST_KERNEL=python3
+cd /work/mxnet/tests/tutorials && nosetests-3.4 test_tutorials.py 
--nologcapture
+}
+
+tutorialtest_ubuntu_python2_gpu() {
+set -ex
+cd /work/mxnet/docs
+export MXNET_DOCS_BUILD_MXNET=0
+make html
+export MXNET_STORAGE_FALLBACK_LOG_VERBOSE=0
+export PYTHONPATH=/work/mxnet/python/
+export MXNET_TUTORIAL_TEST_KERNEL=python2
+cd /work/mxnet/tests/tutorials && nosetests-3.4 test_tutorials.py 
--nologcapture
 
 Review comment:
   nosetest is trying to be smart, but in this case, since we are spawning a 
new process, we get a whole load of unwanted extra logging:
   - the raw text of the notebook
   - the debug printout of Jupyter (e.g. the heartbeat signals of the kernels)
   I am already printing the errors and warnings (which are captured even with 
--nologcapture).
   
   I use nosetests-3.4 to run the tests as it supports unicode by default; 
however, `MXNET_TUTORIAL_TEST_KERNEL=python2` is what controls the environment 
the notebooks are run in.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182790384
 
 

 ##
 File path: tests/tutorials/test_sanity_tutorials.py
 ##
 @@ -0,0 +1,81 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+import glob
+import os
+import re
+
+# White list of non-downloadable tutorials
 
 Review comment:
   Those are the tutorials that we are OK with not being downloadable as 
Jupyter notebooks: typically the C++, Scala, and R tutorials.
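   
   For illustration, a minimal sketch of the kind of check this whitelist 
supports (the file layout, marker string, and whitelist entries below are 
assumptions, not the actual test): every tutorial page should carry the 
notebook-download marker unless it is whitelisted.

```python
import glob
import os

# Hypothetical whitelist entries: tutorials with no Jupyter notebook to download.
WHITELIST = ['c++/basics.md', 'scala/mnist.md', 'r/ndarray.md']

TUTORIAL_ROOT = os.path.join('docs', 'tutorials')  # assumed docs layout
MARKER = 'INSERT SOURCE DOWNLOAD BUTTONS'          # assumed download-button marker

def test_tutorials_downloadable():
    for path in glob.glob(os.path.join(TUTORIAL_ROOT, '*', '*.md')):
        rel = os.path.relpath(path, TUTORIAL_ROOT).replace(os.sep, '/')
        if rel in WHITELIST:
            continue  # explicitly allowed to have no downloadable notebook
        with open(path) as f:
            assert MARKER in f.read(), \
                "%s is not downloadable and not whitelisted" % rel
```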


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add tutorial tests to the CI

2018-04-19 Thread GitBox
ThomasDelteil commented on a change in pull request #10608: [MXNET-292] Add 
tutorial tests to the CI
URL: https://github.com/apache/incubator-mxnet/pull/10608#discussion_r182790921
 
 

 ##
 File path: tests/tutorials/test_tutorials.py
 ##
 @@ -0,0 +1,187 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+#
+#   http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+#pylint: disable=no-member, too-many-locals, too-many-branches, no-self-use, 
broad-except, lost-exception, too-many-nested-blocks, too-few-public-methods, 
invalid-name
+"""
+This script converts all python tutorials into python script
+and tests whether there is any warning or error.
+After running python script, it will also convert markdown files
+to notebooks to make sure notebook execution has no error.
+"""
+import os
+import warnings
+import imp
+import shutil
+import time
+import argparse
+import traceback
+import nbformat
+from nbconvert.preprocessors import ExecutePreprocessor
+import sys
+
+
+TIME_OUT = 1800
+temp_dir = 'tmp_notebook'
+
+def _test_tutorial_nb(tutorial):
+"""Run tutorial jupyter notebook to catch any execution error.
+
+Parameters
+--
+tutorial : str
+tutorial name in folder/tutorial format
+"""
+
+tutorial_dir = os.path.join(os.path.dirname(__file__), '..', '..', 'docs', 
'_build', 'html', 'tutorials')
+tutorial_path = os.path.join(*([tutorial_dir] + tutorial.split('/')))
+
+kernel = os.getenv('MXNET_TUTORIAL_TEST_KERNEL', None)
+no_cache = os.getenv('MXNET_TUTORIAL_TEST_NO_CACHE', False)
+
+working_dir = os.path.join(*([temp_dir] + tutorial.split('/')))
+
+if no_cache:
+print("Cleaning and setting up temp directory 
'{}'".format(working_dir))
+shutil.rmtree(temp_dir, ignore_errors=True)
+
+errors = []
+notebook = None
+if not os.path.isdir(working_dir):
+os.makedirs(working_dir)
+try:
+notebook = nbformat.read(tutorial_path + '.ipynb', as_version=4)
+if kernel is not None:
+eprocessor = ExecutePreprocessor(timeout=TIME_OUT, 
kernel_name=kernel)
+else:
+eprocessor = ExecutePreprocessor(timeout=TIME_OUT)
+nb, stuff = eprocessor.preprocess(notebook, {'metadata': {'path': 
working_dir}})
+print(stuff)
+except Exception as err:
+err_msg = str(err)
+errors.append(err_msg)
+finally:
+if notebook is not None:
+output_file = os.path.join(working_dir, "output.txt")
+nbformat.write(notebook, output_file)
+output_nb = open(output_file, mode='r')
+for line in output_nb:
+if "Warning:" in line:
+errors.append("Warning:\n"+line)
+if len(errors) > 0:
+print('\n'.join(errors))
+return False
+return True
+
+
+
+def test_basic_ndarray():
+   assert _test_tutorial_nb('basic/ndarray')
+
+def test_basic_ndarray_indexing():
+assert _test_tutorial_nb('basic/ndarray_indexing')
+
+def test_basic_symbol():
+assert _test_tutorial_nb('basic/symbol')
+
+def test_basic_module():
+assert _test_tutorial_nb('basic/module')
+
+def test_basic_data():
+assert _test_tutorial_nb('basic/data')
+
+def test_gluon_customop():
+assert _test_tutorial_nb('gluon/customop')
+
+def test_gluon_data_augmentation():
+assert _test_tutorial_nb('gluon/data_augmentation')
+
+def test_gluon_datasets():
+assert True
+# Investigating flakiness with docker
+#assert _test_tutorial_nb('gluon/datasets')
 
 Review comment:
   That is a weird one; I might leave it out for this iteration and 
investigate more next week. It runs fine outside the container and in the 
notebook, but hangs indefinitely when run through nosetests in a container, 
using up all GPUs.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zheng-da commented on a change in pull request #10613: Add Windows MKLDNN Building Instruction

2018-04-19 Thread GitBox
zheng-da commented on a change in pull request #10613: Add Windows MKLDNN 
Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/10613#discussion_r182798789
 
 

 ##
 File path: MKL_README.md
 ##
 @@ -17,3 +17,106 @@ Installing and enabling the full MKL installation enables 
MKL support for all op
 
   5. Run 'sudo python setup.py install'
 
+
+## Build/Install MXNet with a full MKL installation on Windows:
+
+To build and install MXNet yourself, you need the following dependencies. 
Install the required dependencies:
+
+1. If [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older-downloads/) is not already 
installed, download and install it. You can download and install the free 
community edition.
+2. Download and Install [CMake](https://cmake.org/) if it is not already 
installed.
+3. Download and install 
[OpenCV](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download).
+4. Unzip the OpenCV package.
+5. Set the environment variable ```OpenCV_DIR``` to point to the ```OpenCV 
build directory``` (```C:\opencv\build\x64\vc14``` for example). Also, you need 
to add the OpenCV bin directory (```C:\opencv\build\x64\vc14\bin``` for 
example) to the ``PATH`` variable.
+6. If you have Intel Math Kernel Library (MKL) installed, set ```MKL_ROOT``` 
to point to ```MKL``` directory that contains the ```include``` and ```lib```. 
Typically, you can find the directory in
+```C:\Program Files 
(x86)\IntelSWTools\compilers_and_libraries_2018\windows\mkl```.
+7. If you don't have the Intel Math Kernel Library (MKL) installed, download 
and install 
[OpenBlas](http://sourceforge.net/projects/openblas/files/v0.2.14/). Note that 
you should also download ```mingw64.dll.zip`` along with openBLAS and add them 
to PATH.
+8. Set the environment variable ```OpenBLAS_HOME``` to point to the 
```OpenBLAS``` directory that contains the ```include``` and ```lib``` 
directories. Typically, you can find the directory in ```C:\Program files 
(x86)\OpenBLAS\```. 
+
+After you have installed all of the required dependencies, build the MXNet 
source code:
+
+1. Download the MXNet source code from 
[GitHub](https://github.com/dmlc/mxnet). Don't forget to pull the submodules:
+```
+git clone https://github.com/apache/incubator-mxnet.git ~/mxnet --recursive
+```
+
+2. Update mkldnn to the newest:
+```
+cd 3rdparty/mkldnn/ && git checkout master && git pull
+```
 
 Review comment:
   This is because mxnet doesn't point to the latest commit on the mkldnn 
master branch right now, correct?
   This is indeed awkward. Ideally, we don't want users to do this manually, 
but on the other hand, it's unclear whether we should move to the latest 
commit in the mkldnn repo right now.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zheng-da commented on a change in pull request #10613: Add Windows MKLDNN Building Instruction

2018-04-19 Thread GitBox
zheng-da commented on a change in pull request #10613: Add Windows MKLDNN 
Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/10613#discussion_r182802601
 
 

 ##
 File path: MKL_README.md
 ##
 @@ -17,3 +17,106 @@ Installing and enabling the full MKL installation enables 
MKL support for all op
 
   5. Run 'sudo python setup.py install'
 
+
+## Build/Install MXNet with a full MKL installation on Windows:
+
+To build and install MXNet yourself, you need the following dependencies. 
Install the required dependencies:
+
+1. If [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older-downloads/) is not already 
installed, download and install it. You can download and install the free 
community edition.
+2. Download and Install [CMake](https://cmake.org/) if it is not already 
installed.
+3. Download and install 
[OpenCV](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download).
+4. Unzip the OpenCV package.
+5. Set the environment variable ```OpenCV_DIR``` to point to the ```OpenCV 
build directory``` (```C:\opencv\build\x64\vc14``` for example). Also, you need 
to add the OpenCV bin directory (```C:\opencv\build\x64\vc14\bin``` for 
example) to the ``PATH`` variable.
+6. If you have Intel Math Kernel Library (MKL) installed, set ```MKL_ROOT``` 
to point to ```MKL``` directory that contains the ```include``` and ```lib```. 
Typically, you can find the directory in
+```C:\Program Files 
(x86)\IntelSWTools\compilers_and_libraries_2018\windows\mkl```.
+7. If you don't have the Intel Math Kernel Library (MKL) installed, download 
and install 
[OpenBlas](http://sourceforge.net/projects/openblas/files/v0.2.14/). Note that 
you should also download ```mingw64.dll.zip`` along with openBLAS and add them 
to PATH.
+8. Set the environment variable ```OpenBLAS_HOME``` to point to the 
```OpenBLAS``` directory that contains the ```include``` and ```lib``` 
directories. Typically, you can find the directory in ```C:\Program files 
(x86)\OpenBLAS\```. 
+
+After you have installed all of the required dependencies, build the MXNet 
source code:
+
+1. Download the MXNet source code from 
[GitHub](https://github.com/dmlc/mxnet). Don't forget to pull the submodules:
+```
+git clone https://github.com/apache/incubator-mxnet.git ~/mxnet --recursive
+```
+
+2. Update mkldnn to the newest:
+```
+cd 3rdparty/mkldnn/ && git checkout master && git pull
+```
+
+Or you can follow the [#216](https://github.com/intel/mkl-dnn/pull/216) to do 
some changes directly.
+
+3. Download [MKLML small 
library](https://github.com/intel/mkl-dnn/releases/download/v0.13/mklml_win_2018.0.2.20180127.zip):
+
+Extract it to `3rdparty/mkldnn/external` manually.
+
+4. Copy file `3rdparty/mkldnn/config_template.vcxproj` to incubator-mxnet root.
+
+5. modify mxnet CMakeLists:
+
+disable cuda and cudnn if you don't have cuda library and enable MKLDNN
+
+```
+mxnet_option(USE_CUDA "Build with CUDA support"   OFF)
+mxnet_option(USE_CUDNN"Build with cudnn support"  OFF) 
+mxnet_option(USE_MKLDNN   "Use MKLDNN variant of MKL (if MKL found)" 
ON IF USE_MKL_IF_AVAILABLE)
+mxnet_option(ENABLE_CUDA_RTC  "Build with CUDA runtime compilation 
support" OFF)
+```
+
+add line `add_definitions(-DMXNET_USE_MKLDNN=1)` so that it can build with 
openblas.
+
+```
+if(USE_MKL_IF_AVAILABLE)
+  if(USE_MKLDNN)
+add_subdirectory(3rdparty/mkldnn)
+include_directories(3rdparty/mkldnn/include)
+list(APPEND mxnet_LINKER_LIBS mkldnn)
+add_definitions(-DMXNET_USE_MKLDNN=1)
+  endif()
+  find_package(MKL)
+```
+
+6. Modify `incubator-mxnet\src\operator\tensor\elemwise_sum.h`:
+
+Modify `Sum` in `line 40,73,80,88,94,97` to `Sum2` since it has conflicts with 
`Sum` in MKLDNN.
+
+7. Start a Visual Studio command prompt.
+8. Use [CMake](https://cmake.org/) to create a Visual Studio solution in 
```./build``` or some other directory. Make sure to specify the architecture in 
the 
+[CMake](https://cmake.org/) command:
+```
+mkdir build
+cd build
+cmake -G "Visual Studio 14 Win64" ..
+```
+
+Note that you should close the openmp since MSVC doesn't support openMP3.0. 
Enable MKLDNN with `MKLDNN_VERBOSE=1`.
 
 Review comment:
   Do we need to turn off OpenMP in CMake?
   Why do we enable MKLDNN_VERBOSE here?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zheng-da commented on a change in pull request #10613: Add Windows MKLDNN Building Instruction

2018-04-19 Thread GitBox
zheng-da commented on a change in pull request #10613: Add Windows MKLDNN 
Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/10613#discussion_r18280
 
 

 ##
 File path: MKL_README.md
 ##
 @@ -17,3 +17,106 @@ Installing and enabling the full MKL installation enables 
MKL support for all op
 
   5. Run 'sudo python setup.py install'
 
+
+## Build/Install MXNet with a full MKL installation on Windows:
+
+To build and install MXNet yourself, you need the following dependencies. 
Install the required dependencies:
+
+1. If [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older-downloads/) is not already 
installed, download and install it. You can download and install the free 
community edition.
+2. Download and Install [CMake](https://cmake.org/) if it is not already 
installed.
+3. Download and install 
[OpenCV](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download).
+4. Unzip the OpenCV package.
+5. Set the environment variable ```OpenCV_DIR``` to point to the ```OpenCV 
build directory``` (```C:\opencv\build\x64\vc14``` for example). Also, you need 
to add the OpenCV bin directory (```C:\opencv\build\x64\vc14\bin``` for 
example) to the ``PATH`` variable.
+6. If you have Intel Math Kernel Library (MKL) installed, set ```MKL_ROOT``` 
to point to ```MKL``` directory that contains the ```include``` and ```lib```. 
Typically, you can find the directory in
+```C:\Program Files 
(x86)\IntelSWTools\compilers_and_libraries_2018\windows\mkl```.
+7. If you don't have the Intel Math Kernel Library (MKL) installed, download 
and install 
[OpenBlas](http://sourceforge.net/projects/openblas/files/v0.2.14/). Note that 
you should also download ```mingw64.dll.zip`` along with openBLAS and add them 
to PATH.
+8. Set the environment variable ```OpenBLAS_HOME``` to point to the 
```OpenBLAS``` directory that contains the ```include``` and ```lib``` 
directories. Typically, you can find the directory in ```C:\Program files 
(x86)\OpenBLAS\```. 
+
+After you have installed all of the required dependencies, build the MXNet 
source code:
+
+1. Download the MXNet source code from 
[GitHub](https://github.com/dmlc/mxnet). Don't forget to pull the submodules:
+```
+git clone https://github.com/apache/incubator-mxnet.git ~/mxnet --recursive
+```
+
+2. Update mkldnn to the newest:
+```
+cd 3rdparty/mkldnn/ && git checkout master && git pull
+```
+
+Or you can follow the [#216](https://github.com/intel/mkl-dnn/pull/216) to do 
some changes directly.
+
+3. Download [MKLML small 
library](https://github.com/intel/mkl-dnn/releases/download/v0.13/mklml_win_2018.0.2.20180127.zip):
+
+Extract it to `3rdparty/mkldnn/external` manually.
+
+4. Copy file `3rdparty/mkldnn/config_template.vcxproj` to incubator-mxnet root.
+
+5. modify mxnet CMakeLists:
+
+disable cuda and cudnn if you don't have cuda library and enable MKLDNN
+
+```
+mxnet_option(USE_CUDA "Build with CUDA support"   OFF)
+mxnet_option(USE_CUDNN"Build with cudnn support"  OFF) 
+mxnet_option(USE_MKLDNN   "Use MKLDNN variant of MKL (if MKL found)" 
ON IF USE_MKL_IF_AVAILABLE)
+mxnet_option(ENABLE_CUDA_RTC  "Build with CUDA runtime compilation 
support" OFF)
+```
+
+add line `add_definitions(-DMXNET_USE_MKLDNN=1)` so that it can build with 
openblas.
+
+```
+if(USE_MKL_IF_AVAILABLE)
+  if(USE_MKLDNN)
+add_subdirectory(3rdparty/mkldnn)
+include_directories(3rdparty/mkldnn/include)
+list(APPEND mxnet_LINKER_LIBS mkldnn)
+add_definitions(-DMXNET_USE_MKLDNN=1)
+  endif()
+  find_package(MKL)
+```
 
 Review comment:
   I'm not sure why we need to do this. If you want to turn some options off 
in CMake, you can do it on the command line (e.g. `cmake -DUSE_CUDA=OFF 
-DUSE_CUDNN=OFF ..`). We definitely don't want users to modify CMakeLists.txt.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zheng-da commented on a change in pull request #10613: Add Windows MKLDNN Building Instruction

2018-04-19 Thread GitBox
zheng-da commented on a change in pull request #10613: Add Windows MKLDNN 
Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/10613#discussion_r182802321
 
 

 ##
 File path: MKL_README.md
 ##
 @@ -17,3 +17,106 @@ Installing and enabling the full MKL installation enables 
MKL support for all op
 
   5. Run 'sudo python setup.py install'
 
+
+## Build/Install MXNet with a full MKL installation on Windows:
+
+To build and install MXNet yourself, you need the following dependencies. 
Install the required dependencies:
+
+1. If [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older-downloads/) is not already 
installed, download and install it. You can download and install the free 
community edition.
+2. Download and Install [CMake](https://cmake.org/) if it is not already 
installed.
+3. Download and install 
[OpenCV](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download).
+4. Unzip the OpenCV package.
+5. Set the environment variable ```OpenCV_DIR``` to point to the ```OpenCV 
build directory``` (```C:\opencv\build\x64\vc14``` for example). Also, you need 
to add the OpenCV bin directory (```C:\opencv\build\x64\vc14\bin``` for 
example) to the ``PATH`` variable.
+6. If you have Intel Math Kernel Library (MKL) installed, set ```MKL_ROOT``` 
to point to ```MKL``` directory that contains the ```include``` and ```lib```. 
Typically, you can find the directory in
+```C:\Program Files 
(x86)\IntelSWTools\compilers_and_libraries_2018\windows\mkl```.
+7. If you don't have the Intel Math Kernel Library (MKL) installed, download 
and install 
[OpenBlas](http://sourceforge.net/projects/openblas/files/v0.2.14/). Note that 
you should also download ```mingw64.dll.zip`` along with openBLAS and add them 
to PATH.
+8. Set the environment variable ```OpenBLAS_HOME``` to point to the 
```OpenBLAS``` directory that contains the ```include``` and ```lib``` 
directories. Typically, you can find the directory in ```C:\Program files 
(x86)\OpenBLAS\```. 
+
+After you have installed all of the required dependencies, build the MXNet 
source code:
+
+1. Download the MXNet source code from 
[GitHub](https://github.com/dmlc/mxnet). Don't forget to pull the submodules:
+```
+git clone https://github.com/apache/incubator-mxnet.git ~/mxnet --recursive
+```
+
+2. Update mkldnn to the newest:
+```
+cd 3rdparty/mkldnn/ && git checkout master && git pull
+```
+
+Or you can follow the [#216](https://github.com/intel/mkl-dnn/pull/216) to do 
some changes directly.
+
+3. Download [MKLML small 
library](https://github.com/intel/mkl-dnn/releases/download/v0.13/mklml_win_2018.0.2.20180127.zip):
+
+Extract it to `3rdparty/mkldnn/external` manually.
+
+4. Copy file `3rdparty/mkldnn/config_template.vcxproj` to incubator-mxnet root.
+
+5. modify mxnet CMakeLists:
+
+disable cuda and cudnn if you don't have cuda library and enable MKLDNN
+
+```
+mxnet_option(USE_CUDA "Build with CUDA support"   OFF)
+mxnet_option(USE_CUDNN"Build with cudnn support"  OFF) 
+mxnet_option(USE_MKLDNN   "Use MKLDNN variant of MKL (if MKL found)" 
ON IF USE_MKL_IF_AVAILABLE)
+mxnet_option(ENABLE_CUDA_RTC  "Build with CUDA runtime compilation 
support" OFF)
+```
+
+add line `add_definitions(-DMXNET_USE_MKLDNN=1)` so that it can build with 
openblas.
+
+```
+if(USE_MKL_IF_AVAILABLE)
+  if(USE_MKLDNN)
+add_subdirectory(3rdparty/mkldnn)
+include_directories(3rdparty/mkldnn/include)
+list(APPEND mxnet_LINKER_LIBS mkldnn)
+add_definitions(-DMXNET_USE_MKLDNN=1)
+  endif()
+  find_package(MKL)
+```
+
+6. Modify `incubator-mxnet\src\operator\tensor\elemwise_sum.h`:
+
+Modify `Sum` in `line 40,73,80,88,94,97` to `Sum2` since it has conflicts with 
`Sum` in MKLDNN.
+
+7. Start a Visual Studio command prompt.
+8. Use [CMake](https://cmake.org/) to create a Visual Studio solution in 
```./build``` or some other directory. Make sure to specify the architecture in 
the 
+[CMake](https://cmake.org/) command:
+```
+mkdir build
+cd build
+cmake -G "Visual Studio 14 Win64" ..
+```
 
 Review comment:
   What is step 7? Can we not do it in a normal command prompt?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zheng-da commented on a change in pull request #10613: Add Windows MKLDNN Building Instruction

2018-04-19 Thread GitBox
zheng-da commented on a change in pull request #10613: Add Windows MKLDNN 
Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/10613#discussion_r182801491
 
 

 ##
 File path: MKL_README.md
 ##
 @@ -17,3 +17,106 @@ Installing and enabling the full MKL installation enables 
MKL support for all op
 
   5. Run 'sudo python setup.py install'
 
+
+## Build/Install MXNet with a full MKL installation on Windows:
+
+To build and install MXNet yourself, you need the following dependencies. 
Install the required dependencies:
+
+1. If [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older-downloads/) is not already 
installed, download and install it. You can download and install the free 
community edition.
+2. Download and Install [CMake](https://cmake.org/) if it is not already 
installed.
+3. Download and install 
[OpenCV](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download).
+4. Unzip the OpenCV package.
+5. Set the environment variable ```OpenCV_DIR``` to point to the ```OpenCV 
build directory``` (```C:\opencv\build\x64\vc14``` for example). Also, you need 
to add the OpenCV bin directory (```C:\opencv\build\x64\vc14\bin``` for 
example) to the ``PATH`` variable.
+6. If you have Intel Math Kernel Library (MKL) installed, set ```MKL_ROOT``` 
to point to ```MKL``` directory that contains the ```include``` and ```lib```. 
Typically, you can find the directory in
+```C:\Program Files 
(x86)\IntelSWTools\compilers_and_libraries_2018\windows\mkl```.
+7. If you don't have the Intel Math Kernel Library (MKL) installed, download 
and install 
[OpenBlas](http://sourceforge.net/projects/openblas/files/v0.2.14/). Note that 
you should also download ```mingw64.dll.zip`` along with openBLAS and add them 
to PATH.
+8. Set the environment variable ```OpenBLAS_HOME``` to point to the 
```OpenBLAS``` directory that contains the ```include``` and ```lib``` 
directories. Typically, you can find the directory in ```C:\Program files 
(x86)\OpenBLAS\```. 
+
+After you have installed all of the required dependencies, build the MXNet 
source code:
+
+1. Download the MXNet source code from 
[GitHub](https://github.com/dmlc/mxnet). Don't forget to pull the submodules:
+```
+git clone https://github.com/apache/incubator-mxnet.git ~/mxnet --recursive
+```
+
+2. Update mkldnn to the newest:
+```
+cd 3rdparty/mkldnn/ && git checkout master && git pull
+```
+
+Or you can follow the [#216](https://github.com/intel/mkl-dnn/pull/216) to do 
some changes directly.
+
+3. Download [MKLML small 
library](https://github.com/intel/mkl-dnn/releases/download/v0.13/mklml_win_2018.0.2.20180127.zip):
+
+Extract it to `3rdparty/mkldnn/external` manually.
+
+4. Copy file `3rdparty/mkldnn/config_template.vcxproj` to incubator-mxnet root.
+
+5. modify mxnet CMakeLists:
+
+disable cuda and cudnn if you don't have cuda library and enable MKLDNN
+
+```
+mxnet_option(USE_CUDA "Build with CUDA support"   OFF)
+mxnet_option(USE_CUDNN"Build with cudnn support"  OFF) 
+mxnet_option(USE_MKLDNN   "Use MKLDNN variant of MKL (if MKL found)" 
ON IF USE_MKL_IF_AVAILABLE)
+mxnet_option(ENABLE_CUDA_RTC  "Build with CUDA runtime compilation 
support" OFF)
+```
+
+add line `add_definitions(-DMXNET_USE_MKLDNN=1)` so that it can build with 
openblas.
+
+```
+if(USE_MKL_IF_AVAILABLE)
+  if(USE_MKLDNN)
+add_subdirectory(3rdparty/mkldnn)
+include_directories(3rdparty/mkldnn/include)
+list(APPEND mxnet_LINKER_LIBS mkldnn)
+add_definitions(-DMXNET_USE_MKLDNN=1)
+  endif()
+  find_package(MKL)
+```
+
+6. Modify `incubator-mxnet\src\operator\tensor\elemwise_sum.h`:
+
+Modify `Sum` in `line 40,73,80,88,94,97` to `Sum2` since it has conflicts with 
`Sum` in MKLDNN.
 
 Review comment:
   Is this a bug? If it is, can you submit a PR to fix it?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zheng-da commented on a change in pull request #10613: Add Windows MKLDNN Building Instruction

2018-04-19 Thread GitBox
zheng-da commented on a change in pull request #10613: Add Windows MKLDNN 
Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/10613#discussion_r182801243
 
 

 ##
 File path: MKL_README.md
 ##
 @@ -17,3 +17,106 @@ Installing and enabling the full MKL installation enables 
MKL support for all op
 
   5. Run 'sudo python setup.py install'
 
+
+## Build/Install MXNet with a full MKL installation on Windows:
+
+To build and install MXNet yourself, you need the following dependencies. 
Install the required dependencies:
+
+1. If [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older-downloads/) is not already 
installed, download and install it. You can download and install the free 
community edition.
+2. Download and Install [CMake](https://cmake.org/) if it is not already 
installed.
+3. Download and install 
[OpenCV](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download).
+4. Unzip the OpenCV package.
+5. Set the environment variable ```OpenCV_DIR``` to point to the ```OpenCV 
build directory``` (```C:\opencv\build\x64\vc14``` for example). Also, you need 
to add the OpenCV bin directory (```C:\opencv\build\x64\vc14\bin``` for 
example) to the ``PATH`` variable.
+6. If you have Intel Math Kernel Library (MKL) installed, set ```MKL_ROOT``` 
to point to ```MKL``` directory that contains the ```include``` and ```lib```. 
Typically, you can find the directory in
+```C:\Program Files 
(x86)\IntelSWTools\compilers_and_libraries_2018\windows\mkl```.
+7. If you don't have the Intel Math Kernel Library (MKL) installed, download 
and install 
[OpenBlas](http://sourceforge.net/projects/openblas/files/v0.2.14/). Note that 
you should also download ```mingw64.dll.zip`` along with openBLAS and add them 
to PATH.
+8. Set the environment variable ```OpenBLAS_HOME``` to point to the 
```OpenBLAS``` directory that contains the ```include``` and ```lib``` 
directories. Typically, you can find the directory in ```C:\Program files 
(x86)\OpenBLAS\```. 
+
+After you have installed all of the required dependencies, build the MXNet 
source code:
+
+1. Download the MXNet source code from 
[GitHub](https://github.com/dmlc/mxnet). Don't forget to pull the submodules:
+```
+git clone https://github.com/apache/incubator-mxnet.git ~/mxnet --recursive
+```
+
+2. Update mkldnn to the newest:
+```
+cd 3rdparty/mkldnn/ && git checkout master && git pull
+```
+
+Or you can follow the [#216](https://github.com/intel/mkl-dnn/pull/216) to do 
some changes directly.
+
+3. Download [MKLML small 
library](https://github.com/intel/mkl-dnn/releases/download/v0.13/mklml_win_2018.0.2.20180127.zip):
+
+Extract it to `3rdparty/mkldnn/external` manually.
+
+4. Copy file `3rdparty/mkldnn/config_template.vcxproj` to incubator-mxnet root.
 
 Review comment:
   Is it possible to do everything from steps 2 to 4 in a single command line? 
Once we update the mkldnn submodule, we won't need step 2. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zheng-da commented on issue #10612: MXPredReshape support Windows

2018-04-19 Thread GitBox
zheng-da commented on issue #10612: MXPredReshape support Windows
URL: https://github.com/apache/incubator-mxnet/pull/10612#issuecomment-382793967
 
 
   Are you using the prediction C API?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on a change in pull request #10613: Add Windows MKLDNN Building Instruction

2018-04-19 Thread GitBox
marcoabreu commented on a change in pull request #10613: Add Windows MKLDNN 
Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/10613#discussion_r182808436
 
 

 ##
 File path: MKL_README.md
 ##
 @@ -17,3 +17,106 @@ Installing and enabling the full MKL installation enables 
MKL support for all op
 
   5. Run 'sudo python setup.py install'
 
+
+## Build/Install MXNet with a full MKL installation on Windows:
+
+To build and install MXNet yourself, you need the following dependencies. 
Install the required dependencies:
+
+1. If [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older-downloads/) is not already 
installed, download and install it. You can download and install the free 
community edition.
+2. Download and Install [CMake](https://cmake.org/) if it is not already 
installed.
+3. Download and install 
[OpenCV](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download).
+4. Unzip the OpenCV package.
+5. Set the environment variable ```OpenCV_DIR``` to point to the ```OpenCV 
build directory``` (```C:\opencv\build\x64\vc14``` for example). Also, you need 
to add the OpenCV bin directory (```C:\opencv\build\x64\vc14\bin``` for 
example) to the ``PATH`` variable.
+6. If you have Intel Math Kernel Library (MKL) installed, set ```MKL_ROOT``` 
to point to ```MKL``` directory that contains the ```include``` and ```lib```. 
Typically, you can find the directory in
+```C:\Program Files 
(x86)\IntelSWTools\compilers_and_libraries_2018\windows\mkl```.
+7. If you don't have the Intel Math Kernel Library (MKL) installed, download 
and install 
[OpenBlas](http://sourceforge.net/projects/openblas/files/v0.2.14/). Note that 
you should also download ```mingw64.dll.zip`` along with openBLAS and add them 
to PATH.
+8. Set the environment variable ```OpenBLAS_HOME``` to point to the 
```OpenBLAS``` directory that contains the ```include``` and ```lib``` 
directories. Typically, you can find the directory in ```C:\Program files 
(x86)\OpenBLAS\```. 
+
+After you have installed all of the required dependencies, build the MXNet 
source code:
+
+1. Download the MXNet source code from 
[GitHub](https://github.com/dmlc/mxnet). Don't forget to pull the submodules:
+```
+git clone https://github.com/apache/incubator-mxnet.git ~/mxnet --recursive
+```
+
+2. Update mkldnn to the newest:
+```
+cd 3rdparty/mkldnn/ && git checkout master && git pull
+```
 
 Review comment:
   I have to agree with @zheng-da and would refrain from adding this 
instruction. We should only support the versions that are part of our 
repository. If there's a need to upgrade, we'll have to do it here. Otherwise, 
we're going to run into issues with people who upgrade on their own while we 
have not validated that version with our CI.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on a change in pull request #10613: Add Windows MKLDNN Building Instruction

2018-04-19 Thread GitBox
marcoabreu commented on a change in pull request #10613: Add Windows MKLDNN 
Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/10613#discussion_r182808832
 
 

 ##
 File path: MKL_README.md
 ##
 @@ -17,3 +17,106 @@ Installing and enabling the full MKL installation enables 
MKL support for all op
 
   5. Run 'sudo python setup.py install'
 
+
+## Build/Install MXNet with a full MKL installation on Windows:
+
+To build and install MXNet yourself, you need the following dependencies. 
Install the required dependencies:
+
+1. If [Microsoft Visual Studio 
2015](https://www.visualstudio.com/vs/older-downloads/) is not already 
installed, download and install it. You can download and install the free 
community edition.
+2. Download and Install [CMake](https://cmake.org/) if it is not already 
installed.
+3. Download and install 
[OpenCV](http://sourceforge.net/projects/opencvlibrary/files/opencv-win/3.0.0/opencv-3.0.0.exe/download).
+4. Unzip the OpenCV package.
+5. Set the environment variable ```OpenCV_DIR``` to point to the ```OpenCV 
build directory``` (```C:\opencv\build\x64\vc14``` for example). Also, you need 
to add the OpenCV bin directory (```C:\opencv\build\x64\vc14\bin``` for 
example) to the ``PATH`` variable.
+6. If you have Intel Math Kernel Library (MKL) installed, set ```MKL_ROOT``` 
to point to ```MKL``` directory that contains the ```include``` and ```lib```. 
Typically, you can find the directory in
+```C:\Program Files 
(x86)\IntelSWTools\compilers_and_libraries_2018\windows\mkl```.
+7. If you don't have the Intel Math Kernel Library (MKL) installed, download 
and install 
[OpenBlas](http://sourceforge.net/projects/openblas/files/v0.2.14/). Note that 
you should also download ```mingw64.dll.zip`` along with openBLAS and add them 
to PATH.
+8. Set the environment variable ```OpenBLAS_HOME``` to point to the 
```OpenBLAS``` directory that contains the ```include``` and ```lib``` 
directories. Typically, you can find the directory in ```C:\Program files 
(x86)\OpenBLAS\```. 
+
+After you have installed all of the required dependencies, build the MXNet 
source code:
+
+1. Download the MXNet source code from 
[GitHub](https://github.com/dmlc/mxnet). Don't forget to pull the submodules:
+```
+git clone https://github.com/apache/incubator-mxnet.git ~/mxnet --recursive
+```
+
+2. Update mkldnn to the newest:
+```
+cd 3rdparty/mkldnn/ && git checkout master && git pull
+```
+
+Or you can follow the [#216](https://github.com/intel/mkl-dnn/pull/216) to do 
some changes directly.
+
+3. Download [MKLML small 
library](https://github.com/intel/mkl-dnn/releases/download/v0.13/mklml_win_2018.0.2.20180127.zip):
+
+Extract it to `3rdparty/mkldnn/external` manually.
+
+4. Copy file `3rdparty/mkldnn/config_template.vcxproj` to incubator-mxnet root.
+
+5. modify mxnet CMakeLists:
+
+disable cuda and cudnn if you don't have cuda library and enable MKLDNN
+
+```
+mxnet_option(USE_CUDA "Build with CUDA support"   OFF)
+mxnet_option(USE_CUDNN"Build with cudnn support"  OFF) 
+mxnet_option(USE_MKLDNN   "Use MKLDNN variant of MKL (if MKL found)" 
ON IF USE_MKL_IF_AVAILABLE)
+mxnet_option(ENABLE_CUDA_RTC  "Build with CUDA runtime compilation 
support" OFF)
+```
+
+add line `add_definitions(-DMXNET_USE_MKLDNN=1)` so that it can build with 
openblas.
+
+```
+if(USE_MKL_IF_AVAILABLE)
+  if(USE_MKLDNN)
+add_subdirectory(3rdparty/mkldnn)
+include_directories(3rdparty/mkldnn/include)
+list(APPEND mxnet_LINKER_LIBS mkldnn)
+add_definitions(-DMXNET_USE_MKLDNN=1)
+  endif()
+  find_package(MKL)
+```
+
+6. Modify `incubator-mxnet\src\operator\tensor\elemwise_sum.h`:
+
+Modify `Sum` in `line 40,73,80,88,94,97` to `Sum2` since it has conflicts with 
`Sum` in MKLDNN.
+
+7. Start a Visual Studio command prompt.
+8. Use [CMake](https://cmake.org/) to create a Visual Studio solution in 
```./build``` or some other directory. Make sure to specify the architecture in 
the 
+[CMake](https://cmake.org/) command:
+```
+mkdir build
+cd build
+cmake -G "Visual Studio 14 Win64" ..
+```
+
+Note that you should close the openmp since MSVC doesn't support openMP3.0. 
Enable MKLDNN with `MKLDNN_VERBOSE=1`.
 
 Review comment:
   Our CMake file should detect this automatically and disable the features 
accordingly. We should not expect users to change the configuration just to 
get it compiled locally. In the end, that's the purpose of CMake, right?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] KaiserSozo commented on issue #10532: NDArray failed to allocate CPU memory

2018-04-19 Thread GitBox
KaiserSozo commented on issue #10532: NDArray failed to allocate CPU memory
URL: 
https://github.com/apache/incubator-mxnet/issues/10532#issuecomment-382801189
 
 
   So, as I concluded, I can avoid calling wait_to_read() every time I update 
weights.set_data(weights.data() + output), and instead call it, for example, 
once every 10 or 20 iterations. That should speed up the process drastically. 
As I can see in the debugger, the actual data is already in 'weights' right 
after calling set_data, even if I don't call wait_to_read() (this explains such 
a big growth in memory usage). So a new iteration of the calculation begins to 
work with up-to-date data. Am I right about that? 
   It would also be nice if you could explain, or give a link to information 
about, the memory usage and data-saving mechanism for a better understanding 
of the subject.
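
A minimal sketch of the periodic-synchronization idea above, using plain
NDArrays rather than the `Parameter.set_data` call from this thread (the array
names and sizes are illustrative, not from the issue):

```python
import mxnet as mx

weights = mx.nd.zeros((1000, 1000))
update = mx.nd.ones((1000, 1000))    # stand-in for the computed output

for it in range(100):
    weights += update                # asynchronous: queued on the MXNet engine
    if (it + 1) % 10 == 0:           # synchronize only every 10th iteration
        weights.wait_to_read()       # blocks until pending writes finish,
                                     # bounding the engine backlog and memory
```

Note that any blocking read (e.g. `asnumpy()`) also forces synchronization, so
an explicit `wait_to_read()` is only needed when nothing else drains the queue.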


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] xinyu-intel commented on issue #10613: Add Windows MKLDNN Building Instruction

2018-04-19 Thread GitBox
xinyu-intel commented on issue #10613: Add Windows MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/10613#issuecomment-382801615
 
 
   Yes, you are right. I will optimize this instruction and test it locally 
first. Thanks for the suggestion :)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] xinyu-intel commented on issue #10613: Add Windows MKLDNN Building Instruction

2018-04-19 Thread GitBox
xinyu-intel commented on issue #10613: Add Windows MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/10613#issuecomment-382801615
 
 
   @zheng-da @marcoabreu Yes, you are right. I will optimize this instruction 
and test it locally first. Thanks for the suggestion :)
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudhacharya commented on issue #10588: Error while trying to run LeakyRelu layer with 1-d input

2018-04-19 Thread GitBox
anirudhacharya commented on issue #10588: Error while trying to run LeakyRelu 
layer with 1-d input
URL: 
https://github.com/apache/incubator-mxnet/issues/10588#issuecomment-382802768
 
 
   @spidyDev I think it is more of a bug than a question.
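
Assuming the failure is shape-related, as the issue title suggests, one
possible workaround until the bug is fixed is to lift the 1-d input to 2-d and
reshape back afterwards (a sketch, not a confirmed fix):

```python
import mxnet as mx

x = mx.nd.array([1.0, -2.0, 3.0])                # 1-d input that reportedly fails
y = mx.nd.LeakyReLU(x.reshape((1, -1)),          # lift to shape (1, 3)
                    act_type='leaky', slope=0.25)
print(y.reshape((-1,)))                          # back to 1-d
```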


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] spirosparaskevas commented on issue #8869: Outdated documentation for installing R packages

2018-04-19 Thread GitBox
spirosparaskevas commented on issue #8869: Outdated documentation for 
installing R packages
URL: 
https://github.com/apache/incubator-mxnet/issues/8869#issuecomment-382809254
 
 
   First of all, thanks for confirming this situation. I have just verified 
this awkwardness. Can you suggest a workaround?



This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] Support elemwise_add/sub/max/min/hypot between dense and csr tensors

2018-04-19 Thread GitBox
eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] 
Support elemwise_add/sub/max/min/hypot between dense and csr tensors
URL: https://github.com/apache/incubator-mxnet/pull/10550#discussion_r182810260
 
 

 ##
 File path: src/operator/elemwise_op_common.h
 ##
 @@ -102,6 +102,48 @@ inline bool ElemwiseStorageType(const nnvm::NodeAttrs& 
attrs,
  in_attrs, out_attrs);
 }
 
+template<bool cpu_only, bool rsp, bool csr>
+inline bool ElemwisePreferDenseStorageType(const nnvm::NodeAttrs& attrs,
+   const int dev_mask,
+   DispatchMode* dispatch_mode,
+   std::vector<int> *in_attrs,
+   std::vector<int> *out_attrs) {
+  using namespace common;
+  CHECK_EQ(in_attrs->size(), 2);
+  CHECK_EQ(out_attrs->size(), 1);
+  const auto lhs_stype = (*in_attrs)[0];
+  const auto rhs_stype = (*in_attrs)[1];
+  bool dispatched = false;
+  const bool invalid_ctx = cpu_only && dev_mask != mshadow::cpu::kDevMask;
+  const auto dispatch_ex = invalid_ctx ? DispatchMode::kFComputeFallback :
+ DispatchMode::kFComputeEx;
+  if (!dispatched && common::ContainsOnlyStorage(*in_attrs, kDefaultStorage)) {
 
 Review comment:
   nit: common::ContainsOnlyStorage -> ContainsOnlyStorage


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] Support elemwise_add/sub/max/min/hypot between dense and csr tensors

2018-04-19 Thread GitBox
eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] 
Support elemwise_add/sub/max/min/hypot between dense and csr tensors
URL: https://github.com/apache/incubator-mxnet/pull/10550#discussion_r182810702
 
 

 ##
 File path: src/operator/elemwise_op_common.h
 ##
 @@ -102,6 +102,48 @@ inline bool ElemwiseStorageType(const nnvm::NodeAttrs& 
attrs,
  in_attrs, out_attrs);
 }
 
+template<bool cpu_only, bool rsp, bool csr>
+inline bool ElemwisePreferDenseStorageType(const nnvm::NodeAttrs& attrs,
 
 Review comment:
   This function should be moved to binary_op instead of elemwise_op_common.h


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] Support elemwise_add/sub/max/min/hypot between dense and csr tensors

2018-04-19 Thread GitBox
eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] 
Support elemwise_add/sub/max/min/hypot between dense and csr tensors
URL: https://github.com/apache/incubator-mxnet/pull/10550#discussion_r182810971
 
 

 ##
 File path: src/operator/elemwise_op_common.h
 ##
 @@ -102,6 +102,48 @@ inline bool ElemwiseStorageType(const nnvm::NodeAttrs& 
attrs,
  in_attrs, out_attrs);
 }
 
+template<bool cpu_only, bool rsp, bool csr>
+inline bool ElemwisePreferDenseStorageType(const nnvm::NodeAttrs& attrs,
+   const int dev_mask,
+   DispatchMode* dispatch_mode,
+   std::vector<int> *in_attrs,
+   std::vector<int> *out_attrs) {
+  using namespace common;
+  CHECK_EQ(in_attrs->size(), 2);
+  CHECK_EQ(out_attrs->size(), 1);
+  const auto lhs_stype = (*in_attrs)[0];
+  const auto rhs_stype = (*in_attrs)[1];
+  bool dispatched = false;
+  const bool invalid_ctx = cpu_only && dev_mask != mshadow::cpu::kDevMask;
+  const auto dispatch_ex = invalid_ctx ? DispatchMode::kFComputeFallback :
+ DispatchMode::kFComputeEx;
+  if (!dispatched && common::ContainsOnlyStorage(*in_attrs, kDefaultStorage)) {
+// dns, dns ... -> dns
+dispatched = storage_type_assign(out_attrs, kDefaultStorage,
+ dispatch_mode, DispatchMode::kFCompute);
+  }
+  if (!dispatched && rsp && ContainsOnlyStorage(*in_attrs, kRowSparseStorage)) 
{
+// rsp, rsp, ... -> rsp
+dispatched = storage_type_assign(out_attrs, kRowSparseStorage,
+ dispatch_mode, dispatch_ex);
+  }
+  if (!dispatched && csr && common::ContainsOnlyStorage(*in_attrs, 
kCSRStorage)) {
+// csr, csr, ... -> csr
+dispatched = storage_type_assign(out_attrs, kCSRStorage,
+ dispatch_mode, dispatch_ex);
+  }
+  if (!dispatched && ((lhs_stype == kDefaultStorage && rhs_stype == 
kCSRStorage) ||
+  (lhs_stype == kCSRStorage && rhs_stype == 
kDefaultStorage))) {
+// dense, csr -> csr / csr, dense -> csr
 
 Review comment:
   I'm not sure what this comment means. Output should be dense??
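
A small Python-level sketch of the behavior this review expects, namely that a
mixed dense/csr input pair dispatches to a dense output (this assumes the
"prefer dense" dispatch discussed in this PR):

```python
import mxnet as mx

dns = mx.nd.ones((3, 4))
csr = mx.nd.sparse.zeros('csr', (3, 4))

out = mx.nd.elemwise_add(dns, csr)
print(out.stype)   # expected: 'default' (dense), not 'csr'
```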


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] Support elemwise_add/sub/max/min/hypot between dense and csr tensors

2018-04-19 Thread GitBox
eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] 
Support elemwise_add/sub/max/min/hypot between dense and csr tensors
URL: https://github.com/apache/incubator-mxnet/pull/10550#discussion_r182813227
 
 

 ##
 File path: src/operator/tensor/elemwise_binary_op-inl.h
 ##
 @@ -374,6 +374,72 @@ void ElemwiseBinaryOp::CsrCsrOp(mshadow::Stream *s,
   }
 }
 
+template<typename OP>
+struct ElemwiseDnsZeroKernel {
+  template<typename DType>
+  static void inline Map(int i, const OpReqType req, DType* out, const DType* 
dns_data,
+ const nnvm::dim_t num_rows, const nnvm::dim_t 
num_cols) {
+if (i < num_rows*num_cols) {
+  KERNEL_ASSIGN(out[i], req, OP::Map(dns_data[i], DType(0.0f)));
+}
+  }
+};
+
+template<typename OP>
+struct ElemwiseDnsCsrDnsKernel {
+  template<typename DType, typename IType, typename CType>
+  static void inline Map(int i, const OpReqType req, DType* out, DType* 
dns_data,
+ const DType* csr_data, const IType* csr_indices, 
const CType* csr_indptr,
+ const nnvm::dim_t num_rows, const nnvm::dim_t 
num_cols) {
+if (i < num_rows) {
+  for (int j = csr_indptr[i]; j < csr_indptr[i+1]; ++j) {
+KERNEL_ASSIGN(out[i * num_cols + csr_indices[j]], req,
+  OP::Map(dns_data[i * num_cols + csr_indices[j]], 
csr_data[j]));
+  }
+}
+  }
+};
+
+/*! \brief DNS -op- CSR binary operator for non-canonical NDArray */
+template<typename DType, typename IType, typename CType, typename OP>
+void ElemwiseBinaryOp::DnsCsrDnsOp(mshadow::Stream<cpu> *s,
+   const nnvm::NodeAttrs &attrs,
+   const OpContext &ctx,
+   const NDArray &dns,
+   const NDArray &csr,
+   const OpReqType req,
+   const NDArray &output,
+   const bool reverse) {
+  using namespace mshadow;
+  using namespace mxnet_op;
+  CHECK_EQ(dns.storage_type(), kDefaultStorage);
+  CHECK_EQ(csr.storage_type(), kCSRStorage);
+  const nnvm::dim_t num_csr_rows = csr.shape()[0];
+  const nnvm::dim_t num_csr_cols = csr.shape()[1];
+  mxnet_op::Kernel<ElemwiseDnsZeroKernel<OP>, cpu>::Launch(
+s, output.data().Size(), req, output.data().dptr<DType>(),
+dns.data().dptr<DType>(),
+num_csr_rows, num_csr_cols);
+  TBlob csr_data = csr.data();
+  TBlob csr_indices = csr.aux_data(csr::kIdx);
+  TBlob csr_indptr = csr.aux_data(csr::kIndPtr);
+  MSHADOW_SGL_DBL_TYPE_SWITCH(csr_data.type_flag_, DataType, {
 
 Review comment:
   The function is already templated. DType === DataType, right? 
   I think the part of the template "typename DType, typename IType, typename 
CType" can be removed, which makes it easier for other operators to call this 
function. 
   I noticed that RspRspOp is also templated. We could probably move the DType 
switches inside the implementation to make the code cleaner, but that will be 
out of the scope of this PR. 
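
For readers following the thread, a small NumPy/SciPy sketch of the two-kernel
scheme quoted above: first fill the output with OP(dense, 0), then overwrite
the stored positions with OP(dense, csr). The helper name is illustrative:

```python
import numpy as np
import scipy.sparse as sp

def dns_op_csr(dns, csr, op):
    # pass 1 (ElemwiseDnsZeroKernel): apply op(dense, 0) at every position
    out = op(dns, 0.0)
    # pass 2 (ElemwiseDnsCsrDnsKernel): apply op(dense, csr) at stored positions
    for i in range(csr.shape[0]):
        for j in range(csr.indptr[i], csr.indptr[i + 1]):
            c = csr.indices[j]
            out[i, c] = op(dns[i, c], csr.data[j])
    return out

dns = np.arange(12, dtype=np.float32).reshape(3, 4)
csr = sp.random(3, 4, density=0.5, format='csr', dtype=np.float32)
print(np.allclose(dns_op_csr(dns, csr, np.add), dns + csr.toarray()))
```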


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] Support elemwise_add/sub/max/min/hypot between dense and csr tensors

2018-04-19 Thread GitBox
eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] 
Support elemwise_add/sub/max/min/hypot between dense and csr tensors
URL: https://github.com/apache/incubator-mxnet/pull/10550#discussion_r182811309
 
 

 ##
 File path: src/operator/tensor/elemwise_binary_op-inl.h
 ##
 @@ -374,6 +374,72 @@ void ElemwiseBinaryOp::CsrCsrOp(mshadow::Stream *s,
   }
 }
 
+template<typename OP>
+struct ElemwiseDnsZeroKernel {
+  template<typename DType>
+  static void inline Map(int i, const OpReqType req, DType* out, const DType* 
dns_data,
+ const nnvm::dim_t num_rows, const nnvm::dim_t 
num_cols) {
+if (i < num_rows*num_cols) {
+  KERNEL_ASSIGN(out[i], req, OP::Map(dns_data[i], DType(0.0f)));
+}
+  }
+};
+
+template<typename OP>
+struct ElemwiseDnsCsrDnsKernel {
+  template<typename DType, typename IType, typename CType>
+  static void inline Map(int i, const OpReqType req, DType* out, DType* 
dns_data,
+ const DType* csr_data, const IType* csr_indices, 
const CType* csr_indptr,
+ const nnvm::dim_t num_rows, const nnvm::dim_t 
num_cols) {
+if (i < num_rows) {
+  for (int j = csr_indptr[i]; j < csr_indptr[i+1]; ++j) {
+KERNEL_ASSIGN(out[i * num_cols + csr_indices[j]], req,
+  OP::Map(dns_data[i * num_cols + csr_indices[j]], 
csr_data[j]));
+  }
+}
+  }
+};
+
+/*! \brief DNS -op- CSR binary operator for non-canonical NDArray */
 
 Review comment:
   Better to move/add this description in elemwise_binary_op.h.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] Support elemwise_add/sub/max/min/hypot between dense and csr tensors

2018-04-19 Thread GitBox
eric-haibin-lin commented on a change in pull request #10550: [MXNET-320] 
Support elemwise_add/sub/max/min/hypot between dense and csr tensors
URL: https://github.com/apache/incubator-mxnet/pull/10550#discussion_r182810306
 
 

 ##
 File path: src/operator/elemwise_op_common.h
 ##
 @@ -102,6 +102,48 @@ inline bool ElemwiseStorageType(const nnvm::NodeAttrs& 
attrs,
  in_attrs, out_attrs);
 }
 
+template<bool cpu_only, bool rsp, bool csr>
+inline bool ElemwisePreferDenseStorageType(const nnvm::NodeAttrs& attrs,
+   const int dev_mask,
+   DispatchMode* dispatch_mode,
+   std::vector<int> *in_attrs,
+   std::vector<int> *out_attrs) {
+  using namespace common;
+  CHECK_EQ(in_attrs->size(), 2);
+  CHECK_EQ(out_attrs->size(), 1);
+  const auto lhs_stype = (*in_attrs)[0];
+  const auto rhs_stype = (*in_attrs)[1];
+  bool dispatched = false;
+  const bool invalid_ctx = cpu_only && dev_mask != mshadow::cpu::kDevMask;
+  const auto dispatch_ex = invalid_ctx ? DispatchMode::kFComputeFallback :
+ DispatchMode::kFComputeEx;
+  if (!dispatched && common::ContainsOnlyStorage(*in_attrs, kDefaultStorage)) {
+// dns, dns ... -> dns
+dispatched = storage_type_assign(out_attrs, kDefaultStorage,
+ dispatch_mode, DispatchMode::kFCompute);
+  }
+  if (!dispatched && rsp && ContainsOnlyStorage(*in_attrs, kRowSparseStorage)) 
{
+// rsp, rsp, ... -> rsp
+dispatched = storage_type_assign(out_attrs, kRowSparseStorage,
+ dispatch_mode, dispatch_ex);
+  }
+  if (!dispatched && csr && common::ContainsOnlyStorage(*in_attrs, 
kCSRStorage)) {
 
 Review comment:
   nit: common::ContainsOnlyStorage -> ContainsOnlyStorage


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong closed pull request #10612: MXPredReshape support Windows

2018-04-19 Thread GitBox
piiswrong closed pull request #10612: MXPredReshape support Windows
URL: https://github.com/apache/incubator-mxnet/pull/10612
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/include/mxnet/c_predict_api.h b/include/mxnet/c_predict_api.h
index a77d77702fe..cc1c2966bd7 100644
--- a/include/mxnet/c_predict_api.h
+++ b/include/mxnet/c_predict_api.h
@@ -134,7 +134,7 @@ MXNET_DLL int MXPredCreatePartialOut(const char* 
symbol_json_str,
  * \param out The reshaped predictor handle.
  * \return 0 when success, -1 when failure.
  */
-int MXPredReshape(mx_uint num_input_nodes,
+MXNET_DLL int MXPredReshape(mx_uint num_input_nodes,
   const char** input_keys,
   const mx_uint* input_shape_indptr,
   const mx_uint* input_shape_data,


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated: MXPredReshape support Windows (#10612)

2018-04-19 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 162c54b  MXPredReshape support Windows (#10612)
162c54b is described below

commit 162c54b19c023b166b2fba6afa271ce7f184888c
Author: chinakook 
AuthorDate: Fri Apr 20 01:11:51 2018 +0800

MXPredReshape support Windows (#10612)
---
 include/mxnet/c_predict_api.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/include/mxnet/c_predict_api.h b/include/mxnet/c_predict_api.h
index a77d777..cc1c296 100644
--- a/include/mxnet/c_predict_api.h
+++ b/include/mxnet/c_predict_api.h
@@ -134,7 +134,7 @@ MXNET_DLL int MXPredCreatePartialOut(const char* 
symbol_json_str,
  * \param out The reshaped predictor handle.
  * \return 0 when success, -1 when failure.
  */
-int MXPredReshape(mx_uint num_input_nodes,
+MXNET_DLL int MXPredReshape(mx_uint num_input_nodes,
   const char** input_keys,
   const mx_uint* input_shape_indptr,
   const mx_uint* input_shape_data,

-- 
To stop receiving notification emails like this one, please contact
j...@apache.org.


[GitHub] piiswrong commented on issue #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
piiswrong commented on issue #10607: New tutorial on how to create a new custom 
layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#issuecomment-382813098
 
 
   We already have one here 
http://gluon.mxnet.io/chapter03_deep-neural-networks/custom-layer.html
   Can we merge these?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182609344
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
+
+In the example below, we define a new layer and implement `forward()` method 
to normalize input data by fitting it into a range of [0, 1].
+
+
+```python
+# Do some initial imports used throughout this tutorial 
+from __future__ import print_function
+import mxnet as mx
+from mxnet import nd, gluon, autograd
+from mxnet.gluon.nn import Dense
+mx.random.seed(1)  # Set seed for reproducable results
+```
+
+
+```python
+class NormalizationLayer(gluon.Block):
+def __init__(self):
+super(NormalizationLayer, self).__init__()
+
+def forward(self, x):
+return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
+```
+
+The rest of methods of the `Block` class are already implemented, and majority 
of them are used to work with parameters of a block. There is one very special 
method named 
[hybridize()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.hybridize),
 though, which I am going to cover before moving to a more complex example of a 
custom layer.
+
+## Hybridization and the difference between Block and HybridBlock
+
+Looking into the implementation of [existing 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), one may 
find that more often a block inherits from a 
[HybridBlock](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock),
 instead of directly inheriting from `Block` class.
+
+The reason for that is that `HybridBlock` allows to write custom layers that 
can be used in imperative programming as well as in symbolic programming. It is 
convinient to support both ways, because of the different values these 
programming models bring. The imperative programming eases the debugging of the 
code - one can use regular debugging tools available in modern IDEs to go line 
by line through the computation. The symbolic programming provides faster 
execution speed, but harder to debug. You can learn more about the difference 
between symbolic vs. imperative programming from [this 
article](https://mxnet.incubator.apache.org/architecture/program_model.html).
+
+Because of these reasons it is recommended to develop a new layer using 
imperative model, but deploy it using symbolic model.
+
+Hybridization is a process that Apache MxNet uses to create a symbolic graph 
of a forward computation. Optimization of this computational graph allows to 
increase performance. Once the symbolic graph is created, Apache MxNet caches 
and reuses it for subsequent computations.
+
+To simplify support of both imperative and symbolic programming, Apache MxNet 
introduce the `HybridBlock` class. Compare to the `Block` class, `HybridBlock` 
already has its 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.forward)
 method implemented, but it defines a 
[hybrid_forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.hybrid_forward)
 method that needs to be implemented.
+
+From API point of view, the main difference between `forward()` and 
`hybrid_forward()` is an `F` argument. This argument sometimes is refered as a 
`backend` in the Apache MxNet community. Depending on if hybridization has been 
done or not, `F` can refer either to [mxnet.ndarray 
API
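
The quoted passage above describes the `F` argument; a minimal sketch of a
hybrid version of the `NormalizationLayer`, assuming the standard
`hybrid_forward(self, F, x)` signature and the `broadcast_*` operators that
exist in both the `mxnet.ndarray` and `mxnet.symbol` namespaces (illustrative,
not the PR's final code):

```python
import mxnet as mx
from mxnet import gluon

class NormalizationHybridLayer(gluon.HybridBlock):
    def __init__(self):
        super(NormalizationHybridLayer, self).__init__()

    def hybrid_forward(self, F, x):
        # F is mxnet.ndarray before hybridize() and mxnet.symbol after it
        return F.broadcast_div(F.broadcast_sub(x, F.min(x)),
                               F.broadcast_sub(F.max(x), F.min(x)))

layer = NormalizationHybridLayer()
layer.hybridize()   # later calls run through a cached symbolic graph
print(layer(mx.nd.array([1, 2, 3, 4, 5])))
```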

[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182821807
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
+
+In the example below, we define a new layer and implement `forward()` method 
to normalize input data by fitting it into a range of [0, 1].
+
+
+```python
+# Do some initial imports used throughout this tutorial 
+from __future__ import print_function
+import mxnet as mx
+from mxnet import nd, gluon, autograd
+from mxnet.gluon.nn import Dense
+mx.random.seed(1)  # Set seed for reproducable results
+```
+
+
+```python
+class NormalizationLayer(gluon.Block):
+def __init__(self):
+super(NormalizationLayer, self).__init__()
+
+def forward(self, x):
+return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
+```
+
+The rest of methods of the `Block` class are already implemented, and majority 
of them are used to work with parameters of a block. There is one very special 
method named 
[hybridize()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.hybridize),
 though, which I am going to cover before moving to a more complex example of a 
custom layer.
+
+## Hybridization and the difference between Block and HybridBlock
+
+Looking into the implementation of [existing 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), one may 
find that more often a block inherits from a 
[HybridBlock](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock),
 instead of directly inheriting from `Block` class.
+
+The reason for that is that `HybridBlock` allows to write custom layers that 
can be used in imperative programming as well as in symbolic programming. It is 
convinient to support both ways, because of the different values these 
programming models bring. The imperative programming eases the debugging of the 
code - one can use regular debugging tools available in modern IDEs to go line 
by line through the computation. The symbolic programming provides faster 
execution speed, but harder to debug. You can learn more about the difference 
between symbolic vs. imperative programming from [this 
article](https://mxnet.incubator.apache.org/architecture/program_model.html).
+
+Because of these reasons it is recommended to develop a new layer using 
imperative model, but deploy it using symbolic model.
+
+Hybridization is a process that Apache MxNet uses to create a symbolic graph 
of a forward computation. Optimization of this computational graph allows to 
increase performance. Once the symbolic graph is created, Apache MxNet caches 
and reuses it for subsequent computations.
+
+To simplify support of both imperative and symbolic programming, Apache MxNet 
introduce the `HybridBlock` class. Compare to the `Block` class, `HybridBlock` 
already has its 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.forward)
 method implemented, but it defines a 
[hybrid_forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.hybrid_forward)
 method that needs to be implemented.
+
+From API point of view, the main difference between `forward()` and 
`hybrid_forward()` is an `F` argument. This argument sometimes is refered as a 
`backend` in the Apache MxNet community. Depending on if hybridization has been 
done or not, `F` can refer either to [mxnet.ndarray 
API

[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182811398
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
+
+In the example below, we define a new layer and implement `forward()` method 
to normalize input data by fitting it into a range of [0, 1].
+
+
+```python
+# Do some initial imports used throughout this tutorial 
+from __future__ import print_function
+import mxnet as mx
+from mxnet import nd, gluon, autograd
+from mxnet.gluon.nn import Dense
+mx.random.seed(1)  # Set seed for reproducable results
+```
+
+
+```python
+class NormalizationLayer(gluon.Block):
+def __init__(self):
+super(NormalizationLayer, self).__init__()
+
+def forward(self, x):
+return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
+```
+
+The rest of methods of the `Block` class are already implemented, and majority 
of them are used to work with parameters of a block. There is one very special 
method named 
[hybridize()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.hybridize),
 though, which I am going to cover before moving to a more complex example of a 
custom layer.
+
+## Hybridization and the difference between Block and HybridBlock
+
+Looking into the implementation of [existing 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), one may 
find that more often a block inherits from a 
[HybridBlock](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock),
 instead of directly inheriting from `Block` class.
+
+The reason for that is that `HybridBlock` allows to write custom layers that 
can be used in imperative programming as well as in symbolic programming. It is 
convinient to support both ways, because of the different values these 
programming models bring. The imperative programming eases the debugging of the 
code - one can use regular debugging tools available in modern IDEs to go line 
by line through the computation. The symbolic programming provides faster 
execution speed, but harder to debug. You can learn more about the difference 
between symbolic vs. imperative programming from [this 
article](https://mxnet.incubator.apache.org/architecture/program_model.html).
+
+Because of these reasons it is recommended to develop a new layer using 
imperative model, but deploy it using symbolic model.
+
+Hybridization is a process that Apache MxNet uses to create a symbolic graph 
of a forward computation. Optimization of this computational graph allows to 
increase performance. Once the symbolic graph is created, Apache MxNet caches 
and reuses it for subsequent computations.
+
+To simplify support of both imperative and symbolic programming, Apache MxNet 
introduce the `HybridBlock` class. Compare to the `Block` class, `HybridBlock` 
already has its 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.forward)
 method implemented, but it defines a 
[hybrid_forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.hybrid_forward)
 method that needs to be implemented.
+
+From API point of view, the main difference between `forward()` and 
`hybrid_forward()` is an `F` argument. This argument sometimes is refered as a 
`backend` in the Apache MxNet community. Depending on if hybridization has been 
done or not, `F` can refer either to [mxnet.ndarray 
API

[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182605957
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
 
 Review comment:
   MXNet instead of MxNet


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182607051
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
+
+In the example below, we define a new layer and implement `forward()` method 
to normalize input data by fitting it into a range of [0, 1].
+
+
+```python
+# Do some initial imports used throughout this tutorial 
+from __future__ import print_function
+import mxnet as mx
+from mxnet import nd, gluon, autograd
+from mxnet.gluon.nn import Dense
+mx.random.seed(1)  # Set seed for reproducable results
+```
+
+
+```python
+class NormalizationLayer(gluon.Block):
+def __init__(self):
+super(NormalizationLayer, self).__init__()
+
+def forward(self, x):
+return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
+```
+
+The rest of methods of the `Block` class are already implemented, and majority 
of them are used to work with parameters of a block. There is one very special 
method named 
[hybridize()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.hybridize),
 though, which I am going to cover before moving to a more complex example of a 
custom layer.
 
 Review comment:
   There is one very special method named hybridize(), though, which I am going 
to cover before moving to a more complex example of a custom layer. -> We will 
now discuss a special method called hybridize() before moving on to more 
complex examples of custom layers.
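
As a concrete illustration of `hybridize()` for this passage, a short sketch
using a stock layer (not code from the tutorial under review):

```python
import mxnet as mx
from mxnet import gluon

net = gluon.nn.Dense(10)
net.initialize()
net.hybridize()                  # enable symbolic-graph construction and caching
out = net(mx.nd.ones((1, 20)))   # the first call builds and caches the graph
print(out.shape)                 # (1, 10)
```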


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182820362
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
+
+In the example below, we define a new layer and implement the `forward()` method 
to normalize input data by fitting it into the range [0, 1].
+
+
+```python
+# Do some initial imports used throughout this tutorial 
+from __future__ import print_function
+import mxnet as mx
+from mxnet import nd, gluon, autograd
+from mxnet.gluon.nn import Dense
+mx.random.seed(1)  # Set seed for reproducible results
+```
+
+
+```python
+class NormalizationLayer(gluon.Block):
+def __init__(self):
+super(NormalizationLayer, self).__init__()
+
+def forward(self, x):
+return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
+```
+
+The rest of methods of the `Block` class are already implemented, and majority 
of them are used to work with parameters of a block. There is one very special 
method named 
[hybridize()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.hybridize),
 though, which I am going to cover before moving to a more complex example of a 
custom layer.
+
+## Hybridization and the difference between Block and HybridBlock
+
+Looking into the implementation of [existing 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), one may 
find that more often a block inherits from a 
[HybridBlock](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock),
 instead of directly inheriting from `Block` class.
+
+The reason for that is that `HybridBlock` allows you to write custom layers 
that can be used in imperative programming as well as in symbolic programming. 
It is convenient to support both ways, because of the different values these
programming models bring. The imperative programming eases the debugging of the 
code - one can use regular debugging tools available in modern IDEs to go line 
by line through the computation. The symbolic programming provides faster 
execution speed, but harder to debug. You can learn more about the difference 
between symbolic vs. imperative programming from [this 
article](https://mxnet.incubator.apache.org/architecture/program_model.html).
+
+For these reasons, it is recommended to develop a new layer using the 
imperative model, but deploy it using the symbolic model.
+
+Hybridization is a process that Apache MxNet uses to create a symbolic graph 
of a forward computation. Optimization of this computational graph allows for 
increased performance. Once the symbolic graph is created, Apache MxNet caches 
and reuses it for subsequent computations.
+
+To simplify support of both imperative and symbolic programming, Apache MxNet 
introduce the `HybridBlock` class. Compare to the `Block` class, `HybridBlock` 
already has its 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.forward)
 method implemented, but it defines a 
[hybrid_forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.hybrid_forward)
 method that needs to be implemented.
+
+From an API point of view, the main difference between `forward()` and 
`hybrid_forward()` is the `F` argument. This argument is sometimes referred to 
as the `backend` in the Apache MxNet community. Depending on whether 
hybridization has been done or not, `F` can refer either to the mxnet.ndarray 
API or to the mxnet.symbol API.
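To make the `F` argument concrete, here is a minimal sketch of the same 
normalization written as a `HybridBlock` (the `NormalizationHybridLayer` name is 
invented here, not taken from the reviewed file); `broadcast_sub`/`broadcast_div` 
are used because, unlike NDArray arithmetic, elementwise Symbol operators expect 
matching shapes:

```python
class NormalizationHybridLayer(gluon.HybridBlock):
    def hybrid_forward(self, F, x):
        # F is mxnet.ndarray before hybridization and mxnet.symbol after it
        return F.broadcast_div(F.broadcast_sub(x, F.min(x)),
                               F.max(x) - F.min(x))

layer = NormalizationHybridLayer()
layer.hybridize()  # later calls run through the cached symbolic graph
print(layer(nd.array([1, 2, 3, 4, 5])))
```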

[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182606148
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
 
 Review comment:
   Does __init__ count as a method?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182607395
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
+
+In the example below, we define a new layer and implement the `forward()` method 
to normalize input data by fitting it into the range [0, 1].
+
+
+```python
+# Do some initial imports used throughout this tutorial 
+from __future__ import print_function
+import mxnet as mx
+from mxnet import nd, gluon, autograd
+from mxnet.gluon.nn import Dense
+mx.random.seed(1)  # Set seed for reproducible results
+```
+
+
+```python
+class NormalizationLayer(gluon.Block):
+def __init__(self):
+super(NormalizationLayer, self).__init__()
+
+def forward(self, x):
+return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
+```
+
+The rest of methods of the `Block` class are already implemented, and majority 
of them are used to work with parameters of a block. There is one very special 
method named 
[hybridize()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.hybridize),
 though, which I am going to cover before moving to a more complex example of a 
custom layer.
+
+## Hybridization and the difference between Block and HybridBlock
+
+Looking into the implementation of [existing 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), one may 
find that more often a block inherits from a 
[HybridBlock](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock),
 instead of directly inheriting from `Block` class.
+
+The reason for that is that `HybridBlock` allows you to write custom layers 
that can be used in imperative programming as well as in symbolic programming. 
It is convenient to support both ways, because of the different values these
programming models bring. The imperative programming eases the debugging of the 
code - one can use regular debugging tools available in modern IDEs to go line 
by line through the computation. The symbolic programming provides faster 
execution speed, but harder to debug. You can learn more about the difference 
between symbolic vs. imperative programming from [this 
article](https://mxnet.incubator.apache.org/architecture/program_model.html).
 
 Review comment:
   but is harder to debug


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182606809
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
+
+In the example below, we define a new layer and implement the `forward()` method 
to normalize input data by fitting it into the range [0, 1].
+
+
+```python
+# Do some initial imports used throughout this tutorial 
+from __future__ import print_function
+import mxnet as mx
+from mxnet import nd, gluon, autograd
+from mxnet.gluon.nn import Dense
+mx.random.seed(1)  # Set seed for reproducible results
+```
+
+
+```python
+class NormalizationLayer(gluon.Block):
+def __init__(self):
+super(NormalizationLayer, self).__init__()
+
+def forward(self, x):
+return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
+```
+
+The rest of methods of the `Block` class are already implemented, and majority 
of them are used to work with parameters of a block. There is one very special 
method named 
[hybridize()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.hybridize),
 though, which I am going to cover before moving to a more complex example of a 
custom layer.
 
 Review comment:
   The rest of methods -> The other methods


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182606204
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
 
 Review comment:
   during the forward pass.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182606410
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
 
 Review comment:
Notice, that it's not required to provide what the block should do during 
backpropagation.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182606791
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
+
+In the example below, we define a new layer and implement the `forward()` method 
to normalize input data by fitting it into the range [0, 1].
+
+
+```python
+# Do some initial imports used throughout this tutorial 
+from __future__ import print_function
+import mxnet as mx
+from mxnet import nd, gluon, autograd
+from mxnet.gluon.nn import Dense
+mx.random.seed(1)  # Set seed for reproducible results
+```
+
+
+```python
+class NormalizationLayer(gluon.Block):
+def __init__(self):
+super(NormalizationLayer, self).__init__()
+
+def forward(self, x):
+return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
+```
+
+The rest of methods of the `Block` class are already implemented, and majority 
of them are used to work with parameters of a block. There is one very special 
method named 
[hybridize()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.hybridize),
 though, which I am going to cover before moving to a more complex example of a 
custom layer.
 
 Review comment:
   A bit more explanation needed here; "work with parameters of block".


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182607884
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
+
+In the example below, we define a new layer and implement the `forward()` method 
to normalize input data by fitting it into the range [0, 1].
+
+
+```python
+# Do some initial imports used throughout this tutorial 
+from __future__ import print_function
+import mxnet as mx
+from mxnet import nd, gluon, autograd
+from mxnet.gluon.nn import Dense
+mx.random.seed(1)  # Set seed for reproducible results
+```
+
+
+```python
+class NormalizationLayer(gluon.Block):
+def __init__(self):
+super(NormalizationLayer, self).__init__()
+
+def forward(self, x):
+return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
+```
+
+The rest of methods of the `Block` class are already implemented, and majority 
of them are used to work with parameters of a block. There is one very special 
method named 
[hybridize()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.hybridize),
 though, which I am going to cover before moving to a more complex example of a 
custom layer.
+
+## Hybridization and the difference between Block and HybridBlock
+
+Looking into the implementation of [existing 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), one may 
find that more often a block inherits from a 
[HybridBlock](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock),
 instead of directly inheriting from `Block` class.
+
+The reason for that is that `HybridBlock` allows you to write custom layers 
that can be used in imperative programming as well as in symbolic programming. 
It is convenient to support both ways, because of the different values these
programming models bring. The imperative programming eases the debugging of the 
code - one can use regular debugging tools available in modern IDEs to go line 
by line through the computation. The symbolic programming provides faster 
execution speed, but harder to debug. You can learn more about the difference 
between symbolic vs. imperative programming from [this 
article](https://mxnet.incubator.apache.org/architecture/program_model.html).
+
+For these reasons, it is recommended to develop a new layer using the 
imperative model, but deploy it using the symbolic model.
+
+Hybridization is a process that Apache MxNet uses to create a symbolic graph 
of a forward computation. Optimization of this computational graph allows for 
increased performance. Once the symbolic graph is created, Apache MxNet caches 
and reuses it for subsequent computations.
+
+To simplify support of both imperative and symbolic programming, Apache MxNet 
introduce the `HybridBlock` class. Compare to the `Block` class, `HybridBlock` 
already has its 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.forward)
 method implemented, but it defines a 
[hybrid_forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.hybrid_forward)
 method that needs to be implemented.
+
+From an API point of view, the main difference between `forward()` and 
`hybrid_forward()` is the `F` argument. This argument is sometimes referred to 
as the `backend` in the Apache MxNet community. Depending on whether 
hybridization has been done or not, `F` can refer either to the mxnet.ndarray 
API or to the mxnet.symbol API.
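A quick way to observe the backend switch (a sketch with an invented 
`BackendPeek` name, not code from the PR) is to print the module that `F` 
refers to:

```python
class BackendPeek(gluon.HybridBlock):
    def hybrid_forward(self, F, x):
        print(F.__name__)  # module name of the active backend
        return x

peek = BackendPeek()
peek(nd.ones(3))   # prints 'mxnet.ndarray'
peek.hybridize()
peek(nd.ones(3))   # prints 'mxnet.symbol' once, while the graph is being built
```

Subsequent calls reuse the cached graph, so `hybrid_forward()` (and the print) 
will not run again.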

[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182818000
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
+
+In the example below, we define a new layer and implement the `forward()` method 
to normalize input data by fitting it into the range [0, 1].
+
+
+```python
+# Do some initial imports used throughout this tutorial 
+from __future__ import print_function
+import mxnet as mx
+from mxnet import nd, gluon, autograd
+from mxnet.gluon.nn import Dense
+mx.random.seed(1)  # Set seed for reproducible results
+```
+
+
+```python
+class NormalizationLayer(gluon.Block):
+def __init__(self):
+super(NormalizationLayer, self).__init__()
+
+def forward(self, x):
+return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
+```
+
+The rest of methods of the `Block` class are already implemented, and majority 
of them are used to work with parameters of a block. There is one very special 
method named 
[hybridize()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.hybridize),
 though, which I am going to cover before moving to a more complex example of a 
custom layer.
+
+## Hybridization and the difference between Block and HybridBlock
+
+Looking into the implementation of [existing 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), one may 
find that more often a block inherits from a 
[HybridBlock](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock),
 instead of directly inheriting from `Block` class.
+
+The reason for that is that `HybridBlock` allows you to write custom layers 
that can be used in imperative programming as well as in symbolic programming. 
It is convenient to support both ways, because of the different values these
programming models bring. The imperative programming eases the debugging of the 
code - one can use regular debugging tools available in modern IDEs to go line 
by line through the computation. The symbolic programming provides faster 
execution speed, but harder to debug. You can learn more about the difference 
between symbolic vs. imperative programming from [this 
article](https://mxnet.incubator.apache.org/architecture/program_model.html).
+
+For these reasons, it is recommended to develop a new layer using the 
imperative model, but deploy it using the symbolic model.
+
+Hybridization is a process that Apache MxNet uses to create a symbolic graph 
of a forward computation. Optimization of this computational graph allows for 
increased performance. Once the symbolic graph is created, Apache MxNet caches 
and reuses it for subsequent computations.
+
+To simplify support of both imperative and symbolic programming, Apache MxNet 
introduce the `HybridBlock` class. Compare to the `Block` class, `HybridBlock` 
already has its 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.forward)
 method implemented, but it defines a 
[hybrid_forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.hybrid_forward)
 method that needs to be implemented.
+
+From an API point of view, the main difference between `forward()` and 
`hybrid_forward()` is the `F` argument. This argument is sometimes referred to 
as the `backend` in the Apache MxNet community. Depending on whether 
hybridization has been done or not, `F` can refer either to the mxnet.ndarray 
API or to the mxnet.symbol API.
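Circling back to the `Block` methods that "work with parameters of a block", a 
small sketch of how they are typically exercised, assuming MXNet's deferred 
parameter initialization and using the `Dense` layer imported in the snippet 
above:

```python
net = Dense(5)
net.initialize()             # parameters are created lazily
net(nd.ones((1, 3)))         # shapes are inferred on the first forward call
print(net.collect_params())  # dict-like view of the block's weight and bias
```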

[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182607700
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
+
+In the example below, we define a new layer and implement the `forward()` method 
to normalize input data by fitting it into the range [0, 1].
+
+
+```python
+# Do some initial imports used throughout this tutorial 
+from __future__ import print_function
+import mxnet as mx
+from mxnet import nd, gluon, autograd
+from mxnet.gluon.nn import Dense
+mx.random.seed(1)  # Set seed for reproducible results
+```
+
+
+```python
+class NormalizationLayer(gluon.Block):
+def __init__(self):
+super(NormalizationLayer, self).__init__()
+
+def forward(self, x):
+return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
+```
+
+The rest of methods of the `Block` class are already implemented, and majority 
of them are used to work with parameters of a block. There is one very special 
method named 
[hybridize()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.hybridize),
 though, which I am going to cover before moving to a more complex example of a 
custom layer.
+
+## Hybridization and the difference between Block and HybridBlock
+
+Looking into the implementation of [existing 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), one may 
find that more often a block inherits from a 
[HybridBlock](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock),
 instead of directly inheriting from `Block` class.
+
+The reason for that is that `HybridBlock` allows you to write custom layers 
that can be used in imperative programming as well as in symbolic programming. 
It is convenient to support both ways, because of the different values these
programming models bring. The imperative programming eases the debugging of the 
code - one can use regular debugging tools available in modern IDEs to go line 
by line through the computation. The symbolic programming provides faster 
execution speed, but harder to debug. You can learn more about the difference 
between symbolic vs. imperative programming from [this 
article](https://mxnet.incubator.apache.org/architecture/program_model.html).
+
+For these reasons, it is recommended to develop a new layer using the 
imperative model, but deploy it using the symbolic model.
+
+Hybridization is a process that Apache MxNet uses to create a symbolic graph 
of a forward computation. Optimization of this computational graph allows for 
increased performance. Once the symbolic graph is created, Apache MxNet caches 
and reuses it for subsequent computations.
+
+To simplify support of both imperative and symbolic programming, Apache MxNet 
introduce the `HybridBlock` class. Compare to the `Block` class, `HybridBlock` 
already has its 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.forward)
 method implemented, but it defines a 
[hybrid_forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.hybrid_forward)
 method that needs to be implemented.
 
 Review comment:
   Compared


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182608065
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
+
+In the example below, we define a new layer and implement the `forward()` method 
to normalize input data by fitting it into the range [0, 1].
+
+
+```python
+# Do some initial imports used throughout this tutorial 
+from __future__ import print_function
+import mxnet as mx
+from mxnet import nd, gluon, autograd
+from mxnet.gluon.nn import Dense
+mx.random.seed(1)  # Set seed for reproducible results
+```
+
+
+```python
+class NormalizationLayer(gluon.Block):
+def __init__(self):
+super(NormalizationLayer, self).__init__()
+
+def forward(self, x):
+return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
+```
+
+The rest of methods of the `Block` class are already implemented, and majority 
of them are used to work with parameters of a block. There is one very special 
method named 
[hybridize()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.hybridize),
 though, which I am going to cover before moving to a more complex example of a 
custom layer.
+
+## Hybridization and the difference between Block and HybridBlock
+
+Looking into the implementation of [existing 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), one may 
find that more often a block inherits from a 
[HybridBlock](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock),
 instead of directly inheriting from `Block` class.
+
+The reason for that is that `HybridBlock` allows you to write custom layers 
that can be used in imperative programming as well as in symbolic programming. 
It is convenient to support both ways, because of the different values these
programming models bring. The imperative programming eases the debugging of the 
code - one can use regular debugging tools available in modern IDEs to go line 
by line through the computation. The symbolic programming provides faster 
execution speed, but harder to debug. You can learn more about the difference 
between symbolic vs. imperative programming from [this 
article](https://mxnet.incubator.apache.org/architecture/program_model.html).
+
+For these reasons, it is recommended to develop a new layer using the 
imperative model, but deploy it using the symbolic model.
+
+Hybridization is a process that Apache MxNet uses to create a symbolic graph 
of a forward computation. Optimization of this computational graph allows for 
increased performance. Once the symbolic graph is created, Apache MxNet caches 
and reuses it for subsequent computations.
+
+To simplify support of both imperative and symbolic programming, Apache MxNet 
introduce the `HybridBlock` class. Compare to the `Block` class, `HybridBlock` 
already has its 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.forward)
 method implemented, but it defines a 
[hybrid_forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.hybrid_forward)
 method that needs to be implemented.
+
+From an API point of view, the main difference between `forward()` and 
`hybrid_forward()` is the `F` argument. This argument is sometimes referred to 
as the `backend` in the Apache MxNet community. Depending on whether 
hybridization has been done or not, `F` can refer either to the mxnet.ndarray 
API or to the mxnet.symbol API.
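A related pitfall worth sketching (the `SafeLayer` name is invented, not from 
the reviewed file): inside `hybrid_forward()`, operators should be called 
through `F` rather than the globally imported `nd`, because after `hybridize()` 
the inputs are symbols:

```python
class SafeLayer(gluon.HybridBlock):
    def hybrid_forward(self, F, x):
        # F.relu dispatches to nd.relu or sym.relu depending on the backend;
        # nd.relu(x) would break here once x is a Symbol
        return F.relu(x)

layer = SafeLayer()
layer.hybridize()
print(layer(nd.array([-1, 0, 1])))  # [0. 0. 1.]
```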

[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182816362
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
+
+In the example below, we define a new layer and implement the `forward()` method 
to normalize input data by fitting it into the range [0, 1].
+
+
+```python
+# Do some initial imports used throughout this tutorial 
+from __future__ import print_function
+import mxnet as mx
+from mxnet import nd, gluon, autograd
+from mxnet.gluon.nn import Dense
+mx.random.seed(1)  # Set seed for reproducible results
+```
+
+
+```python
+class NormalizationLayer(gluon.Block):
+def __init__(self):
+super(NormalizationLayer, self).__init__()
+
+def forward(self, x):
+return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
+```
+
+The rest of methods of the `Block` class are already implemented, and majority 
of them are used to work with parameters of a block. There is one very special 
method named 
[hybridize()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.hybridize),
 though, which I am going to cover before moving to a more complex example of a 
custom layer.
+
+## Hybridization and the difference between Block and HybridBlock
+
+Looking into the implementation of [existing 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), one may 
find that more often a block inherits from a 
[HybridBlock](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock),
 instead of directly inheriting from `Block` class.
+
+The reason for that is that `HybridBlock` allows you to write custom layers 
that can be used in imperative programming as well as in symbolic programming. 
It is convenient to support both ways, because of the different values these
programming models bring. The imperative programming eases the debugging of the 
code - one can use regular debugging tools available in modern IDEs to go line 
by line through the computation. The symbolic programming provides faster 
execution speed, but harder to debug. You can learn more about the difference 
between symbolic vs. imperative programming from [this 
article](https://mxnet.incubator.apache.org/architecture/program_model.html).
+
+For these reasons, it is recommended to develop a new layer using the 
imperative model, but deploy it using the symbolic model.
+
+Hybridization is a process that Apache MxNet uses to create a symbolic graph 
of a forward computation. Optimization of this computational graph allows for 
increased performance. Once the symbolic graph is created, Apache MxNet caches 
and reuses it for subsequent computations.
+
+To simplify support of both imperative and symbolic programming, Apache MxNet 
introduce the `HybridBlock` class. Compare to the `Block` class, `HybridBlock` 
already has its 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.forward)
 method implemented, but it defines a 
[hybrid_forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.hybrid_forward)
 method that needs to be implemented.
+
+From an API point of view, the main difference between `forward()` and 
`hybrid_forward()` is the `F` argument. This argument is sometimes referred to 
as the `backend` in the Apache MxNet community. Depending on whether 
hybridization has been done or not, `F` can refer either to the mxnet.ndarray 
API or to the mxnet.symbol API.

[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182814901
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
+
+In the example below, we define a new layer and implement the `forward()` method 
to normalize input data by fitting it into the range [0, 1].
+
+
+```python
+# Do some initial imports used throughout this tutorial 
+from __future__ import print_function
+import mxnet as mx
+from mxnet import nd, gluon, autograd
+from mxnet.gluon.nn import Dense
+mx.random.seed(1)  # Set seed for reproducible results
+```
+
+
+```python
+class NormalizationLayer(gluon.Block):
+def __init__(self):
+super(NormalizationLayer, self).__init__()
+
+def forward(self, x):
+return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
+```
+
+The rest of methods of the `Block` class are already implemented, and majority 
of them are used to work with parameters of a block. There is one very special 
method named 
[hybridize()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.hybridize),
 though, which I am going to cover before moving to a more complex example of a 
custom layer.
+
+## Hybridization and the difference between Block and HybridBlock
+
+Looking into the implementation of [existing 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), one may 
find that more often a block inherits from a 
[HybridBlock](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock),
 instead of directly inheriting from `Block` class.
+
+The reason for that is that `HybridBlock` allows you to write custom layers 
that can be used in imperative programming as well as in symbolic programming. 
It is convenient to support both ways, because of the different values these
programming models bring. The imperative programming eases the debugging of the 
code - one can use regular debugging tools available in modern IDEs to go line 
by line through the computation. The symbolic programming provides faster 
execution speed, but harder to debug. You can learn more about the difference 
between symbolic vs. imperative programming from [this 
article](https://mxnet.incubator.apache.org/architecture/program_model.html).
+
+For these reasons, it is recommended to develop a new layer using the 
imperative model, but deploy it using the symbolic model.
+
+Hybridization is a process that Apache MxNet uses to create a symbolic graph 
of a forward computation. Optimization of this computational graph allows for 
increased performance. Once the symbolic graph is created, Apache MxNet caches 
and reuses it for subsequent computations.
+
+To simplify support of both imperative and symbolic programming, Apache MxNet 
introduce the `HybridBlock` class. Compare to the `Block` class, `HybridBlock` 
already has its 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.forward)
 method implemented, but it defines a 
[hybrid_forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.hybrid_forward)
 method that needs to be implemented.
+
+From an API point of view, the main difference between `forward()` and 
`hybrid_forward()` is the `F` argument. This argument is sometimes referred to 
as the `backend` in the Apache MxNet community. Depending on whether 
hybridization has been done or not, `F` can refer either to the mxnet.ndarray 
API or to the mxnet.symbol API.

[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182606114
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
 
 Review comment:
   The only instance method that needs to be implemented


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182607687
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MxNet Gluon API
+
+While Gluon API for Apache MxNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point one may find that a new layer is needed. Adding a new layer in Gluon API 
is straightforward, yet there are a few things that one needs to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what are possible pitfalls and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in Gluon API, one must create a class that inherits from 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MxNet inherits from `Block`, words "layer" and "block" are used 
interchangeably inside of the Apache MxNet community.
+
+The only instance method needed to be implemented is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines what exactly your layer is going to do during forward 
propagation. Notice, that it doesn't require to provide what the block should 
do during backpropagation. Backpropagation pass for blocks is done by Apache 
MxNet for you. 
+
+In the example below, we define a new layer and implement the `forward()` method 
to normalize input data by fitting it into the range [0, 1].
+
+
+```python
+# Do some initial imports used throughout this tutorial 
+from __future__ import print_function
+import mxnet as mx
+from mxnet import nd, gluon, autograd
+from mxnet.gluon.nn import Dense
+mx.random.seed(1)  # Set seed for reproducible results
+```
+
+
+```python
+class NormalizationLayer(gluon.Block):
+def __init__(self):
+super(NormalizationLayer, self).__init__()
+
+def forward(self, x):
+return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
+```
+
+The rest of methods of the `Block` class are already implemented, and majority 
of them are used to work with parameters of a block. There is one very special 
method named 
[hybridize()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.hybridize),
 though, which I am going to cover before moving to a more complex example of a 
custom layer.
+
+## Hybridization and the difference between Block and HybridBlock
+
+Looking into the implementation of [existing 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), one may 
find that more often a block inherits from a 
[HybridBlock](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock),
 instead of directly inheriting from `Block` class.
+
+The reason for that is that `HybridBlock` allows you to write custom layers 
that can be used in imperative programming as well as in symbolic programming. 
It is convenient to support both ways, because of the different values these
programming models bring. The imperative programming eases the debugging of the 
code - one can use regular debugging tools available in modern IDEs to go line 
by line through the computation. The symbolic programming provides faster 
execution speed, but harder to debug. You can learn more about the difference 
between symbolic vs. imperative programming from [this 
article](https://mxnet.incubator.apache.org/architecture/program_model.html).
+
+For these reasons, it is recommended to develop a new layer using the 
imperative model, but to deploy it using the symbolic model.
+
+Hybridization is the process Apache MXNet uses to create a symbolic graph of a 
forward computation. Optimizing this computational graph improves performance. 
Once the symbolic graph is created, Apache MXNet caches and reuses it for 
subsequent computations.
+
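+To make this concrete, here is a minimal sketch of hybridizing a small 
network built from predefined layers (the network shape below is arbitrary and 
chosen only for illustration):
+
+```python
+# Build a tiny network from hybridizable built-in blocks
+net = gluon.nn.HybridSequential()
+with net.name_scope():
+    net.add(Dense(5))
+net.initialize()
+net.hybridize()  # from now on, calls are traced into a cached symbolic graph
+print(net(nd.random.uniform(shape=(2, 10))))
+```
+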
+To simplify supporting both imperative and symbolic programming, Apache MXNet 
introduced the `HybridBlock` class. Compared to the `Block` class, 
`HybridBlock` already has its 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.forward)
 method implemented, but it defines a 
[hybrid_forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.hybrid_forward)
 method that needs to be implemented.
 
 Review comment:
   introduced


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

[GitHub] thomelane commented on a change in pull request #10607: New tutorial on how to create a new custom layer in Gluon

2018-04-19 Thread GitBox
thomelane commented on a change in pull request #10607: New tutorial on how to 
create a new custom layer in Gluon
URL: https://github.com/apache/incubator-mxnet/pull/10607#discussion_r182608126
 
 

 ##
 File path: docs/tutorials/python/custom_layer.md
 ##
 @@ -0,0 +1,247 @@
+
+# How to write a custom layer in Apache MXNet Gluon API
+
+While the Gluon API for Apache MXNet comes with [a decent number of predefined 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), at some 
point you may find that a new layer is needed. Adding a new layer to the Gluon 
API is straightforward, yet there are a few things to keep in mind.
+
+In this article, I will cover how to create a new layer from scratch, how to 
use it, what the possible pitfalls are, and how to avoid them.
+
+## The simplest custom layer
+
+To create a new layer in the Gluon API, you must create a class that inherits 
from the 
[Block](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block)
 class. This class provides the most basic functionality, and all predefined 
layers inherit from it directly or via other subclasses. Because each layer in 
Apache MXNet inherits from `Block`, the words "layer" and "block" are used 
interchangeably in the Apache MXNet community.
+
+The only instance method you need to implement is 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.forward),
 which defines exactly what your layer is going to do during forward 
propagation. Notice that you don't need to specify what the block should do 
during backpropagation: the backpropagation pass for blocks is handled by 
Apache MXNet for you.
+
+In the example below, we define a new layer and implement the `forward()` 
method to normalize input data by scaling it into the range [0, 1].
+
+
+```python
+# Do some initial imports used throughout this tutorial 
+from __future__ import print_function
+import mxnet as mx
+from mxnet import nd, gluon, autograd
+from mxnet.gluon.nn import Dense
+mx.random.seed(1)  # Set seed for reproducible results
+```
+
+
+```python
+class NormalizationLayer(gluon.Block):
+    def __init__(self):
+        super(NormalizationLayer, self).__init__()
+
+    def forward(self, x):
+        return (x - nd.min(x)) / (nd.max(x) - nd.min(x))
+```
+
+The rest of the methods of the `Block` class are already implemented, and the 
majority of them are used to work with the parameters of a block. There is one 
very special method named 
[hybridize()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.Block.hybridize),
 though, which I am going to cover before moving on to a more complex example 
of a custom layer.
+
+## Hybridization and the difference between Block and HybridBlock
+
+Looking into the implementation of [existing 
layers](https://mxnet.incubator.apache.org/api/python/gluon/nn.html), one may 
find that, more often than not, a block inherits from 
[HybridBlock](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock)
 instead of inheriting directly from the `Block` class.
+
+The reason is that `HybridBlock` lets you write custom layers that can be used 
in imperative programming as well as in symbolic programming. It is convenient 
to support both, because the two programming models bring different value. 
Imperative programming eases debugging: you can use the regular debugging 
tools available in modern IDEs to step line by line through the computation. 
Symbolic programming provides faster execution speed, but is harder to debug. 
You can learn more about the difference between symbolic and imperative 
programming in [this 
article](https://mxnet.incubator.apache.org/architecture/program_model.html).
+
+For these reasons, it is recommended to develop a new layer using the 
imperative model, but to deploy it using the symbolic model.
+
+Hybridization is the process Apache MXNet uses to create a symbolic graph of a 
forward computation. Optimizing this computational graph improves performance. 
Once the symbolic graph is created, Apache MXNet caches and reuses it for 
subsequent computations.
+
+To simplify supporting both imperative and symbolic programming, Apache MXNet 
introduced the `HybridBlock` class. Compared to the `Block` class, 
`HybridBlock` already has its 
[forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.forward)
 method implemented, but it defines a 
[hybrid_forward()](https://mxnet.incubator.apache.org/api/python/gluon/gluon.html#mxnet.gluon.HybridBlock.hybrid_forward)
 method that needs to be implemented.
+
+From an API point of view, the main difference between `forward()` and 
`hybrid_forward()` is the `F` argument. This argument is sometimes referred to 
as a `backend` in the Apache MXNet community. Depending on whether 
hybridization has been done or not, `F` can refer either to the mxnet.ndarray 
API or to the mxnet.symbol API.
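+
+For illustration, here is a minimal sketch of the same normalization layer 
rewritten as a `HybridBlock` (the class name below is an assumption made for 
this example, not taken from the original text); note that the computation is 
expressed through `F` instead of calling `nd` directly:
+
+```python
+class NormalizationHybridLayer(gluon.HybridBlock):
+    def __init__(self):
+        super(NormalizationHybridLayer, self).__init__()
+
+    def hybrid_forward(self, F, x):
+        # F is mx.nd before hybridization and mx.sym after it
+        return (x - F.min(x)) / (F.max(x) - F.min(x))
+```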
