[GitHub] yajiedesign commented on issue #10613: Add Windows MKLDNN Building Instruction

2018-04-21 Thread GitBox
yajiedesign commented on issue #10613: Add Windows MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/10613#issuecomment-383278835
 
 
   I fixed it in https://github.com/apache/incubator-mxnet/pull/10629


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhaoxy2018 opened a new issue #10634: How to get internal output with C++ API?

2018-04-21 Thread GitBox
zhaoxy2018 opened a new issue #10634: How to get internal output with C++ API?
URL: https://github.com/apache/incubator-mxnet/issues/10634
 
 
   I want to get some intermediate results in the network with the C++ API. I 
found an old issue, but the link suggested in its solution 
(https://github.com/dmlc/mxnet/tree/master/example/cpp/image-classification) is 
no longer valid. So what is the proper way to inspect intermediate network 
results with the new C++ API?




[GitHub] xinyu-intel commented on issue #10613: Add Windows MKLDNN Building Instruction

2018-04-21 Thread GitBox
xinyu-intel commented on issue #10613: Add Windows MKLDNN Building Instruction
URL: https://github.com/apache/incubator-mxnet/pull/10613#issuecomment-383282908
 
 
   @yajiedesign Good job :) I think it will be very helpful in simplifying my workflow!




[GitHub] xinyu-intel commented on a change in pull request #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
xinyu-intel commented on a change in pull request #10629: [MXNET-343]fix Mkldnn 
with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#discussion_r183207896
 
 

 ##
 File path: Jenkinsfile
 ##
 @@ -358,7 +358,7 @@ try {
   mkdir pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libmxnet.lib pkg_%BUILD_NAME%\\lib
   copy build_%BUILD_NAME%\\libmxnet.dll pkg_%BUILD_NAME%\\build
-  copy build_%BUILD_NAME%\\mkldnn.dll pkg_%BUILD_NAME%\\build
+  copy build_%BUILD_NAME%\\3rdparty\\mkldnn\\mkldnn.dll pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libiomp5md.dll pkg_%BUILD_NAME%\\build
 
 Review comment:
   Not sure if the omp and mklml DLLs are also in 3rdparty\\mkldnn\\.




[GitHub] yajiedesign commented on a change in pull request #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
yajiedesign commented on a change in pull request #10629: [MXNET-343]fix Mkldnn 
with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#discussion_r183208559
 
 

 ##
 File path: Jenkinsfile
 ##
 @@ -358,7 +358,7 @@ try {
   mkdir pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libmxnet.lib pkg_%BUILD_NAME%\\lib
   copy build_%BUILD_NAME%\\libmxnet.dll pkg_%BUILD_NAME%\\build
-  copy build_%BUILD_NAME%\\mkldnn.dll pkg_%BUILD_NAME%\\build
+  copy build_%BUILD_NAME%\\3rdparty\\mkldnn\\mkldnn.dll pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libiomp5md.dll pkg_%BUILD_NAME%\\build
 
 Review comment:
   No, omp and mklml are in the root path; I copy them there in CMake.
   The problem is that I can't figure out the location of mkldnn.




[GitHub] yajiedesign commented on issue #10626: Could you help build windows pypi package mxnet-cu90 for version 1.0.0 and version 1.1.0?

2018-04-21 Thread GitBox
yajiedesign commented on issue #10626: Could you help build windows pypi 
package mxnet-cu90 for version 1.0.0 and version 1.1.0?
URL: 
https://github.com/apache/incubator-mxnet/issues/10626#issuecomment-383291805
 
 
   all done




[GitHub] chinakook commented on issue #10633: [MXNET-346] Hard Sigmoid Operator

2018-04-21 Thread GitBox
chinakook commented on issue #10633: [MXNET-346] Hard Sigmoid Operator
URL: https://github.com/apache/incubator-mxnet/pull/10633#issuecomment-383292186
 
 
   It’s widely used in LSTMs in Keras. This op could be added into LSTM, GRU, etc.




[GitHub] xinyu-intel commented on a change in pull request #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
xinyu-intel commented on a change in pull request #10629: [MXNET-343]fix Mkldnn 
with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#discussion_r183209056
 
 

 ##
 File path: Jenkinsfile
 ##
 @@ -358,7 +358,7 @@ try {
   mkdir pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libmxnet.lib pkg_%BUILD_NAME%\\lib
   copy build_%BUILD_NAME%\\libmxnet.dll pkg_%BUILD_NAME%\\build
-  copy build_%BUILD_NAME%\\mkldnn.dll pkg_%BUILD_NAME%\\build
+  copy build_%BUILD_NAME%\\3rdparty\\mkldnn\\mkldnn.dll pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libiomp5md.dll pkg_%BUILD_NAME%\\build
 
 Review comment:
   Maybe 3rdparty\\mkldnn\\Release\\mkldnn.dll




[GitHub] xinyu-intel commented on a change in pull request #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
xinyu-intel commented on a change in pull request #10629: [MXNET-343]fix Mkldnn 
with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#discussion_r183209056
 
 

 ##
 File path: Jenkinsfile
 ##
 @@ -358,7 +358,7 @@ try {
   mkdir pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libmxnet.lib pkg_%BUILD_NAME%\\lib
   copy build_%BUILD_NAME%\\libmxnet.dll pkg_%BUILD_NAME%\\build
-  copy build_%BUILD_NAME%\\mkldnn.dll pkg_%BUILD_NAME%\\build
+  copy build_%BUILD_NAME%\\3rdparty\\mkldnn\\mkldnn.dll pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libiomp5md.dll pkg_%BUILD_NAME%\\build
 
 Review comment:
   Maybe 3rdparty\\mkldnn\\src\\Release\\mkldnn.dll




[GitHub] xinyu-intel commented on a change in pull request #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
xinyu-intel commented on a change in pull request #10629: [MXNET-343]fix Mkldnn 
with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#discussion_r183209056
 
 

 ##
 File path: Jenkinsfile
 ##
 @@ -358,7 +358,7 @@ try {
   mkdir pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libmxnet.lib pkg_%BUILD_NAME%\\lib
   copy build_%BUILD_NAME%\\libmxnet.dll pkg_%BUILD_NAME%\\build
-  copy build_%BUILD_NAME%\\mkldnn.dll pkg_%BUILD_NAME%\\build
+  copy build_%BUILD_NAME%\\3rdparty\\mkldnn\\mkldnn.dll pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libiomp5md.dll pkg_%BUILD_NAME%\\build
 
 Review comment:
   Maybe 3rdparty\\mkldnn\\Release\\mkldnn.dll




[GitHub] xinyu-intel commented on a change in pull request #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
xinyu-intel commented on a change in pull request #10629: [MXNET-343]fix Mkldnn 
with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#discussion_r183209056
 
 

 ##
 File path: Jenkinsfile
 ##
 @@ -358,7 +358,7 @@ try {
   mkdir pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libmxnet.lib pkg_%BUILD_NAME%\\lib
   copy build_%BUILD_NAME%\\libmxnet.dll pkg_%BUILD_NAME%\\build
-  copy build_%BUILD_NAME%\\mkldnn.dll pkg_%BUILD_NAME%\\build
+  copy build_%BUILD_NAME%\\3rdparty\\mkldnn\\mkldnn.dll pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libiomp5md.dll pkg_%BUILD_NAME%\\build
 
 Review comment:
   Maybe 3rdparty\\mkldnn\\src\\mkldnn.dll




[GitHub] xinyu-intel commented on a change in pull request #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
xinyu-intel commented on a change in pull request #10629: [MXNET-343]fix Mkldnn 
with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#discussion_r183209561
 
 

 ##
 File path: Jenkinsfile
 ##
 @@ -358,7 +358,7 @@ try {
   mkdir pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libmxnet.lib pkg_%BUILD_NAME%\\lib
   copy build_%BUILD_NAME%\\libmxnet.dll pkg_%BUILD_NAME%\\build
-  copy build_%BUILD_NAME%\\mkldnn.dll pkg_%BUILD_NAME%\\build
+  copy build_%BUILD_NAME%\\3rdparty\\mkldnn\\mkldnn.dll pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libiomp5md.dll pkg_%BUILD_NAME%\\build
 
 Review comment:
   It seems 3rdparty\mkldnn\src\mkldnn.dll is right, but some deps are still 
missing.




[GitHub] yajiedesign commented on a change in pull request #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
yajiedesign commented on a change in pull request #10629: [MXNET-343]fix Mkldnn 
with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#discussion_r183209684
 
 

 ##
 File path: Jenkinsfile
 ##
 @@ -358,7 +358,7 @@ try {
   mkdir pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libmxnet.lib pkg_%BUILD_NAME%\\lib
   copy build_%BUILD_NAME%\\libmxnet.dll pkg_%BUILD_NAME%\\build
-  copy build_%BUILD_NAME%\\mkldnn.dll pkg_%BUILD_NAME%\\build
+  copy build_%BUILD_NAME%\\3rdparty\\mkldnn\\mkldnn.dll pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libiomp5md.dll pkg_%BUILD_NAME%\\build
 
 Review comment:
   Yes, I don't know what is missing. Maybe msvcrt120.dll.




[GitHub] yajiedesign commented on a change in pull request #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
yajiedesign commented on a change in pull request #10629: [MXNET-343]fix Mkldnn 
with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#discussion_r183209713
 
 

 ##
 File path: Jenkinsfile
 ##
 @@ -358,7 +358,7 @@ try {
   mkdir pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libmxnet.lib pkg_%BUILD_NAME%\\lib
   copy build_%BUILD_NAME%\\libmxnet.dll pkg_%BUILD_NAME%\\build
-  copy build_%BUILD_NAME%\\mkldnn.dll pkg_%BUILD_NAME%\\build
+  copy build_%BUILD_NAME%\\3rdparty\\mkldnn\\mkldnn.dll pkg_%BUILD_NAME%\\build
   copy build_%BUILD_NAME%\\libiomp5md.dll pkg_%BUILD_NAME%\\build
 
 Review comment:
   Someone needs to log in and look at it with Depends.




[GitHub] mrkn opened a new pull request #10635: Stop ignoring the given name of CompositeEvalMetric

2018-04-21 Thread GitBox
mrkn opened a new pull request #10635: Stop ignoring the given name of 
CompositeEvalMetric
URL: https://github.com/apache/incubator-mxnet/pull/10635
 
 
   mxnet.metric.CompositeEvalMetric ignores the given name in its constructor.
   It shouldn't be ignored.
   
   Before:
   
   ```
   >>> acc = mx.metric.CompositeEvalMetric(name='composite_xyz')
   >>> acc.name
   'composite'
   ```
   
   After:
   
   ```
   >>> acc = mx.metric.CompositeEvalMetric(name='composite_xyz')
   >>> acc.name
   'composite_xyz'
   ```
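   The fix can be sketched in plain Python. The classes below are hypothetical 
stand-ins for the `mxnet.metric` ones (not the real source), showing only how 
forwarding the constructor argument changes `acc.name`:

   ```python
   # Hypothetical stand-ins for the mxnet.metric classes (not the real source),
   # sketching the fix: forward the user-supplied name instead of discarding it.
   class EvalMetric:
       def __init__(self, name):
           self.name = name

   class CompositeEvalMetric(EvalMetric):
       def __init__(self, metrics=None, name='composite'):
           # The buggy version passed the literal 'composite' here regardless
           # of the `name` argument; passing `name` through is the whole fix.
           super().__init__(name)
           self.metrics = metrics or []

   acc = CompositeEvalMetric(name='composite_xyz')
   print(acc.name)  # -> composite_xyz
   ```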




[GitHub] barkntuncer opened a new issue #10636: How to install mxnet for CUDA8.0 properly? mxnet-cu80 error

2018-04-21 Thread GitBox
barkntuncer opened a new issue #10636: How to install mxnet for CUDA8.0 
properly? mxnet-cu80 error
URL: https://github.com/apache/incubator-mxnet/issues/10636
 
 
   ## Description
   Trying to install mxnet for CUDA-8.0 with mxnet-cu80 while following the 
instructions [here](https://mxnet.incubator.apache.org/install/index.html). 
When I try the Python code below from the page, it gives an error.

   
   ## Environment info (Required)
   Ubuntu 16.04 Python 3.5.2 CUDA-8.0 nvidia 950m
   ```
   What to do:
   1. Download the diagnosis script from 
https://raw.githubusercontent.com/apache/incubator-mxnet/master/tools/diagnose.py
   2. Run the script using `python diagnose.py` and paste its output here.
   
   --Python Info--
   Version  : 3.5.2
   Compiler : GCC 5.4.0 20160609
   Build: ('default', 'Nov 23 2017 16:37:01')
   Arch : ('64bit', 'ELF')
   Pip Info---
   Version  : 8.1.1
   Directory: /usr/lib/python3/dist-packages/pip
   --MXNet Info---
   Version  : 0.11.0
   Directory: /usr/local/lib/python3.5/dist-packages/mxnet
   Commit Hash   : 53274b4a2b0d73f3fbdb10cfb5f9ed0c8263fda7
   --System Info--
   Platform : Linux-4.13.0-37-generic-x86_64-with-Ubuntu-16.04-xenial
   system   : Linux
   node : barkntuncer
   release  : 4.13.0-37-generic
   version  : #42~16.04.1-Ubuntu SMP Wed Mar 7 16:03:28 UTC 2018
   --Hardware Info--
   machine  : x86_64
   processor: x86_64
   Architecture:  x86_64
   CPU op-mode(s):32-bit, 64-bit
   Byte Order:Little Endian
   CPU(s):8
   On-line CPU(s) list:   0-7
   Thread(s) per core:2
   Core(s) per socket:4
   Socket(s): 1
   NUMA node(s):  1
   Vendor ID: GenuineIntel
   CPU family:6
   Model: 94
   Model name:Intel(R) Core(TM) i7-6700HQ CPU @ 2.60GHz
   Stepping:  3
   CPU MHz:   2600.000
   CPU max MHz:   3500,
   CPU min MHz:   800,
   BogoMIPS:  5184.00
   Virtualization:VT-x
   L1d cache: 32K
   L1i cache: 32K
   L2 cache:  256K
   L3 cache:  6144K
   NUMA node0 CPU(s): 0-7
   Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge 
mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx 
pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl 
xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq dtes64 
monitor ds_cpl vmx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid sse4_1 sse4_2 
x2apic movbe popcnt aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch 
cpuid_fault epb invpcid_single pti retpoline intel_pt rsb_ctxsw tpr_shadow vnmi 
flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid 
rtm mpx rdseed adx smap clflushopt xsaveopt xsavec xgetbv1 xsaves dtherm ida 
arat pln pts hwp hwp_notify hwp_act_window hwp_epp
   --Network Test--
   Setting timeout: 10
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0031 sec, LOAD: 
0.2751 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.1011 sec, LOAD: 
1.1429 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0016 sec, LOAD: 
0.1321 sec.
   Timing for FashionMNIST: 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz,
 DNS: 0.0009 sec, LOAD: 0.5194 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0014 sec, 
LOAD: 0.2902 sec.
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0010 
sec, LOAD: 1.0151 sec.
   ```
   
   Package used (Python/R/Scala/Julia):
   (I'm using PYTHON)
   
   For Scala user, please provide:
   1. Java version: (`java -version`)
   2. Maven version: (`mvn -version`)
   3. Scala runtime if applicable: (`scala -version`)
   
   For R user, please provide R `sessionInfo()`:
   
   ## Build info (Required if built from source)
   
   Compiler (gcc/clang/mingw/visual studio):
   
   MXNet commit hash:
   (Paste the output of `git rev-parse HEAD` here.)
   
   Build config:
   (Paste the content of config.mk, or the build command.)
   
   ## Error Message:
   (Paste the complete error message, including stack trace.)
   
   terminate called after throwing an instance of 'dmlc::Error'
 what():  [18:32:00] 
/home/travis/build/dmlc/mxnet-distro/mxnet-build/mshadow/mshadow/./tensor_gpu-inl.h:35:
 Check failed: e == cudaSuccess CUDA: unknown error
   
   Stack trace returned 9 entries:
   [bt] (0) /usr/local/lib/python3.5/dist-packages/mxnet/libmxnet.so(+0x2ab998) 
[0x7fb7a6b54998]
   [bt] (1) /usr/local/lib/python3.5/dist-packages/mxnet/libmxnet.so(+0x2abda8) 
[0x7fb7a6b54da8]
   [bt] (2) 
/usr/local/lib/python3

[GitHub] marcoabreu commented on a change in pull request #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
marcoabreu commented on a change in pull request #10629: [MXNET-343]fix Mkldnn 
with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#discussion_r183215160
 
 

 ##
 File path: CMakeLists.txt
 ##
 @@ -18,8 +18,8 @@ mxnet_option(USE_CUDNN"Build with cudnn support" 
 ON) # one could se
 mxnet_option(USE_SSE  "Build with x86 SSE instruction support" ON)
 mxnet_option(USE_LAPACK   "Build with lapack support" ON IF NOT MSVC)
 mxnet_option(USE_MKL_IF_AVAILABLE "Use MKL if found" ON)
-mxnet_option(USE_MKLML_MKL"Use MKLDNN variant of MKL (if MKL found)" 
ON IF USE_MKL_IF_AVAILABLE AND UNIX AND (NOT APPLE))
-mxnet_option(USE_MKLDNN   "Use MKLDNN variant of MKL (if MKL found)" 
ON IF USE_MKL_IF_AVAILABLE AND UNIX AND (NOT APPLE))
+mxnet_option(USE_MKLML_MKL"Use MKLDNN variant of MKL (if MKL found)" 
OFF IF (NOT APPLE))
 
 Review comment:
   Please keep the USE_MKL_IF_AVAILABLE argument - you're changing the default 
behaviour here
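
   For illustration, the old `ON IF <cond1> AND <cond2> ...` form can be 
modeled as "default ON only when every listed condition holds" (a rough Python 
model, not CMake semantics verbatim), which shows why dropping 
USE_MKL_IF_AVAILABLE from the condition list changes the default whenever that 
flag is OFF:

   ```python
   # Rough model of mxnet_option's "ON IF <conditions>" default (not actual
   # CMake): the option defaults to ON only when every condition is true.
   def option_default(*conditions):
       return 'ON' if all(conditions) else 'OFF'

   use_mkl_if_available, unix, apple = False, True, False

   old = option_default(use_mkl_if_available, unix, not apple)  # honors the flag
   new = option_default(not apple)                              # flag dropped
   print(old, new)  # -> OFF ON
   ```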






[GitHub] marcoabreu commented on a change in pull request #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
marcoabreu commented on a change in pull request #10629: [MXNET-343]fix Mkldnn 
with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#discussion_r183215179
 
 

 ##
 File path: Jenkinsfile
 ##
 @@ -311,7 +311,7 @@ try {
 bat """mkdir build_vc14_gpu
   call "C:\\Program Files (x86)\\Microsoft Visual Studio 
14.0\\VC\\bin\\x86_amd64\\vcvarsx86_amd64.bat"
   cd build_vc14_gpu
-  cmake -G \"NMake Makefiles JOM\" -DUSE_CUDA=1 -DUSE_CUDNN=1 
-DUSE_NVRTC=1 -DUSE_OPENCV=1 -DUSE_OPENMP=1 -DUSE_PROFILER=1 -DUSE_BLAS=open 
-DUSE_LAPACK=1 -DUSE_DIST_KVSTORE=0 -DCUDA_ARCH_NAME=All 
-DCMAKE_CXX_FLAGS_RELEASE="/FS /MD /O2 /Ob2 /DNDEBUG" 
-DCMAKE_BUILD_TYPE=Release ${env.WORKSPACE}"""
+  cmake -G \"NMake Makefiles JOM\" -DUSE_CUDA=1 -DUSE_CUDNN=1 
-DUSE_NVRTC=1 -DUSE_OPENCV=1 -DUSE_OPENMP=1 -DUSE_PROFILER=1 -DUSE_BLAS=open 
-DUSE_LAPACK=1 -DUSE_DIST_KVSTORE=0 -DCUDA_ARCH_NAME=Maxwell 
-DCMAKE_CXX_FLAGS_RELEASE="/FS /MD /O2 /Ob2 /DNDEBUG" 
-DCMAKE_BUILD_TYPE=Release ${env.WORKSPACE}"""
 
 Review comment:
   Could you elaborate on this change? We're trying to cover all supported archs.




[GitHub] marcoabreu commented on a change in pull request #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
marcoabreu commented on a change in pull request #10629: [MXNET-343]fix Mkldnn 
with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#discussion_r183215222
 
 

 ##
 File path: Jenkinsfile
 ##
 @@ -334,6 +334,47 @@ try {
 }
   }
 },
+'Build GPU MKLDNN windows':{
+  node('mxnetwindows-cpu') {
+timeout(time: max_time, unit: 'MINUTES') {
+  ws('workspace/build-gpu') {
+withEnv(['OpenBLAS_HOME=C:\\mxnet\\openblas', 
'OpenCV_DIR=C:\\mxnet\\opencv_vc14', 
'CUDA_PATH=C:\\CUDA\\v8.0','BUILD_NAME=vc14_gpu_mkldnn']) {
+init_git_win()
+bat """mkdir build_%BUILD_NAME%
+  call "C:\\Program Files (x86)\\Microsoft Visual Studio 
14.0\\VC\\bin\\x86_amd64\\vcvarsx86_amd64.bat"
+  cd build_%BUILD_NAME%
+  copy 
${env.WORKSPACE}\\3rdparty\\mkldnn\\config_template.vcxproj.user 
${env.WORKSPACE}\\config_template.vcxproj.user /y
+  cmake -G \"NMake Makefiles JOM\" -DUSE_CUDA=1 -DUSE_CUDNN=1 
-DUSE_NVRTC=1 -DUSE_OPENCV=1 -DUSE_OPENMP=1 -DUSE_PROFILER=1 -DUSE_BLAS=open 
-DUSE_LAPACK=1 -DUSE_DIST_KVSTORE=0 -DCUDA_ARCH_NAME=Maxwell -DUSE_MKLDNN=1 
-DCMAKE_CXX_FLAGS_RELEASE="/FS /MD /O2 /Ob2 /DNDEBUG" 
-DCMAKE_BUILD_TYPE=Release ${env.WORKSPACE}"""
+bat '''
+call "C:\\Program Files (x86)\\Microsoft Visual Studio 
14.0\\VC\\bin\\x86_amd64\\vcvarsx86_amd64.bat"
+cd build_%BUILD_NAME%
+set /a cores=36 * 2
+jom -j 72
 
 Review comment:
   Please use the proper env variables and don't hardcode these values
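
   The hardcoded job count could instead be derived from the build agent's 
environment; here is a hedged Python sketch of that idea (the batch equivalent 
would read %NUMBER_OF_PROCESSORS%; `build_jobs` is a made-up helper name):

   ```python
   import os

   # Sketch of deriving the parallel build job count from the environment
   # instead of hardcoding "set /a cores=36 * 2" / "jom -j 72" in the
   # Jenkinsfile. NUMBER_OF_PROCESSORS is the variable Windows defines;
   # os.cpu_count() is the portable fallback used here for illustration.
   def build_jobs():
       n = os.environ.get('NUMBER_OF_PROCESSORS')
       return int(n) if n else (os.cpu_count() or 1)

   print('jom -j %d' % build_jobs())  # job count now follows the build agent
   ```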




[GitHub] marcoabreu commented on a change in pull request #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
marcoabreu commented on a change in pull request #10629: [MXNET-343]fix Mkldnn 
with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#discussion_r183215251
 
 

 ##
 File path: cmake/FirstClassLangCuda.cmake
 ##
 @@ -125,8 +125,8 @@ endif ()
 #   mshadow_select_nvcc_arch_flags(out_variable)
 function(mshadow_select_nvcc_arch_flags out_variable)
 
-  set(CUDA_ARCH_LIST "Auto" CACHE STRING "Select target NVIDIA GPU 
achitecture.")
-  set_property( CACHE CUDA_ARCH_LIST PROPERTY STRINGS "" "All" "Common" 
${CUDA_KNOWN_GPU_ARCHITECTURES} )
+  set(CUDA_ARCH_LIST "" CACHE STRING "Select target NVIDIA GPU achitecture.")
 
 Review comment:
   default behaviour change?




[GitHub] marcoabreu commented on a change in pull request #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
marcoabreu commented on a change in pull request #10629: [MXNET-343]fix Mkldnn 
with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#discussion_r183215285
 
 

 ##
 File path: python/mxnet/libinfo.py
 ##
 @@ -37,6 +38,8 @@ def find_lib_path():
 logging.warning("MXNET_LIBRARY_PATH should be an absolute 
path, instead of: %s",
 lib_from_env)
 else:
+if os.name == 'nt':
+os.environ['PATH'] = os.path.dirname(lib_from_env) + ';' + 
os.environ['PATH']
 
 Review comment:
   Please append to the end of path instead of the beginning to prevent path 
masking
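
   The suggested change can be sketched as follows (an illustrative snippet, 
not the actual libinfo.py code; '/tmp/example_mxnet_lib' is a made-up 
directory):

   ```python
   import os

   # Illustrative sketch: append the DLL directory to the END of PATH so it
   # cannot shadow ("mask") directories that are already on the path.
   def append_to_path(lib_dir):
       entries = os.environ.get('PATH', '').split(os.pathsep)
       if lib_dir not in entries:
           os.environ['PATH'] = os.pathsep.join(entries + [lib_dir])

   append_to_path('/tmp/example_mxnet_lib')
   print(os.environ['PATH'].endswith('/tmp/example_mxnet_lib'))  # -> True
   ```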




[GitHub] marcoabreu commented on a change in pull request #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
marcoabreu commented on a change in pull request #10629: [MXNET-343]fix Mkldnn 
with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#discussion_r183215290
 
 

 ##
 File path: python/mxnet/libinfo.py
 ##
 @@ -69,6 +72,8 @@ def find_lib_path():
 if len(lib_path) == 0:
 raise RuntimeError('Cannot find the MXNet library.\n' +
'List of candidates:\n' + str('\n'.join(dll_path)))
+if os.name == 'nt':
+os.environ['PATH'] = os.path.dirname(lib_path[0]) + ';' + 
os.environ['PATH']
 
 Review comment:
   same here




[GitHub] barkntuncer closed issue #10636: How to install mxnet for CUDA8.0 properly? mxnet-cu80 error

2018-04-21 Thread GitBox
barkntuncer closed issue #10636: How to install mxnet for CUDA8.0 properly? 
mxnet-cu80 error
URL: https://github.com/apache/incubator-mxnet/issues/10636
 
 
   




[GitHub] barkntuncer commented on issue #10636: How to install mxnet for CUDA8.0 properly? mxnet-cu80 error

2018-04-21 Thread GitBox
barkntuncer commented on issue #10636: How to install mxnet for CUDA8.0 
properly? mxnet-cu80 error
URL: 
https://github.com/apache/incubator-mxnet/issues/10636#issuecomment-383318396
 
 
   Reinstalling the NVIDIA driver fixed it.




[GitHub] haojin2 commented on issue #10633: [MXNET-346] Hard Sigmoid Operator

2018-04-21 Thread GitBox
haojin2 commented on issue #10633: [MXNET-346] Hard Sigmoid Operator
URL: https://github.com/apache/incubator-mxnet/pull/10633#issuecomment-383328533
 
 
   @chinakook Thanks for your advice! I'll definitely talk to the relevant 
people about this.




[GitHub] yajiedesign commented on a change in pull request #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
yajiedesign commented on a change in pull request #10629: [MXNET-343]fix Mkldnn 
with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#discussion_r183222432
 
 

 ##
 File path: cmake/FirstClassLangCuda.cmake
 ##
 @@ -125,8 +125,8 @@ endif ()
 #   mshadow_select_nvcc_arch_flags(out_variable)
 function(mshadow_select_nvcc_arch_flags out_variable)
 
-  set(CUDA_ARCH_LIST "Auto" CACHE STRING "Select target NVIDIA GPU 
achitecture.")
-  set_property( CACHE CUDA_ARCH_LIST PROPERTY STRINGS "" "All" "Common" 
${CUDA_KNOWN_GPU_ARCHITECTURES} )
+  set(CUDA_ARCH_LIST "" CACHE STRING "Select target NVIDIA GPU achitecture.")
 
 Review comment:
   No, it still defaults to empty.




[GitHub] yajiedesign commented on a change in pull request #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
yajiedesign commented on a change in pull request #10629: [MXNET-343]fix Mkldnn 
with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#discussion_r183222490
 
 

 ##
 File path: Jenkinsfile
 ##
 @@ -311,7 +311,7 @@ try {
 bat """mkdir build_vc14_gpu
   call "C:\\Program Files (x86)\\Microsoft Visual Studio 14.0\\VC\\bin\\x86_amd64\\vcvarsx86_amd64.bat"
   cd build_vc14_gpu
-  cmake -G \"NMake Makefiles JOM\" -DUSE_CUDA=1 -DUSE_CUDNN=1 -DUSE_NVRTC=1 -DUSE_OPENCV=1 -DUSE_OPENMP=1 -DUSE_PROFILER=1 -DUSE_BLAS=open -DUSE_LAPACK=1 -DUSE_DIST_KVSTORE=0 -DCUDA_ARCH_NAME=All -DCMAKE_CXX_FLAGS_RELEASE="/FS /MD /O2 /Ob2 /DNDEBUG" -DCMAKE_BUILD_TYPE=Release ${env.WORKSPACE}"""
+  cmake -G \"NMake Makefiles JOM\" -DUSE_CUDA=1 -DUSE_CUDNN=1 -DUSE_NVRTC=1 -DUSE_OPENCV=1 -DUSE_OPENMP=1 -DUSE_PROFILER=1 -DUSE_BLAS=open -DUSE_LAPACK=1 -DUSE_DIST_KVSTORE=0 -DCUDA_ARCH_NAME=Maxwell -DCMAKE_CXX_FLAGS_RELEASE="/FS /MD /O2 /Ob2 /DNDEBUG" -DCMAKE_BUILD_TYPE=Release ${env.WORKSPACE}"""
 
 Review comment:
   This only speeds up compilation. Theoretically, it does not affect the 
correctness of programs.
   In fact, even if you use "All", not every architecture is exercised at test 
time; the NVIDIA driver will only choose a suitable one.






[GitHub] xinyu-intel commented on issue #10629: [MXNET-343]fix Mkldnn with msvc

2018-04-21 Thread GitBox
xinyu-intel commented on issue #10629: [MXNET-343]fix Mkldnn with msvc
URL: https://github.com/apache/incubator-mxnet/pull/10629#issuecomment-383351919
 
 
   @yajiedesign @marcoabreu It seems all checks passed. I'll modify the README 
in #10613 when this PR is merged. Thank you:)








[GitHub] ThomasDelteil commented on issue #1356: override global initialization method in layer configuration

2018-04-21 Thread GitBox
ThomasDelteil commented on issue #1356: override global initialization method 
in layer configuration
URL: 
https://github.com/apache/incubator-mxnet/issues/1356#issuecomment-383355423
 
 
   I am trying to load a np array into a Dense layer as weight like this:
   
   ```python
   weights = np.zeros((512,512))
   np.fill_diagonal(weights,1)
   last_layer = gluon.nn.Dense(
       512,
       use_bias=False,
       weight_initializer=mx.initializer.Load({'last_layer_weight': nd.array(weights, ctx)}),
       prefix='last_layer_',
       in_units=512
   )
   last_layer.initialize()
   ```
   I get:
   ```
   
   ---------------------------------------------------------------------------
   AssertionError                            Traceback (most recent call last)
   in ()
         8     in_units=512
         9 )
   ---> 10 last_layer.initialize()
   
   ~/anaconda3/lib/python3.6/site-packages/mxnet/gluon/block.py in initialize(self, init, ctx, verbose, force_reinit)
       380             Whether to force re-initialization if parameter is already initialized.
       381             """
   --> 382         self.collect_params().initialize(init, ctx, verbose, force_reinit)
       383 
       384     def hybridize(self, active=True, **kwargs):
   
   ~/anaconda3/lib/python3.6/site-packages/mxnet/gluon/parameter.py in initialize(self, init, ctx, verbose, force_reinit)
       685             init.set_verbosity(verbose=verbose)
       686         for _, v in self.items():
   --> 687             v.initialize(None, ctx, init, force_reinit=force_reinit)
       688 
       689     def zero_grad(self):
   
   ~/anaconda3/lib/python3.6/site-packages/mxnet/gluon/parameter.py in initialize(self, init, ctx, default_init, force_reinit)
       338 
       339         self._deferred_init = (init, ctx, default_init, None)
   --> 340         self._finish_deferred_init()
       341 
       342     def reset_ctx(self, ctx):
   
   ~/anaconda3/lib/python3.6/site-packages/mxnet/gluon/parameter.py in _finish_deferred_init(self)
       240                              ctx=context.cpu())
       241             initializer.create(default_init)(
   --> 242                 initializer.InitDesc(self.name, {'__init__': init}), data)
       243 
       244             self._init_impl(data, ctx)
   
   ~/anaconda3/lib/python3.6/site-packages/mxnet/initializer.py in __call__(self, desc, arr)
       136         if init:
       137             # when calling Variable initializer
   --> 138             create(init)._init_weight(desc, arr)
       139             self._verbose_print(desc, init, arr)
       140         else:
   
   ~/anaconda3/lib/python3.6/site-packages/mxnet/registry.py in create(*args, **kwargs)
       147             return create(**name)
       148 
   --> 149     assert isinstance(name, string_types), "%s must be of string type"%nickname
       150 
       151     if name.startswith('['):
   
   AssertionError: initializer must be of string type
   ```
   
   update:
   To accomplish the above, you can use:
   ```python
   last_layer = gluon.nn.Dense(
   512, 
   in_units=512, 
   use_bias=False, 
   weight_initializer=mx.init.Constant(nd.array(np.identity(512)))
   )
   last_layer.collect_params().initialize(ctx=ctx)
   last_layer.params['dense0_weight'].data()
   ```




[GitHub] szha closed issue #10626: Could you help build windows pypi package mxnet-cu90 for version 1.0.0 and version 1.1.0?

2018-04-21 Thread GitBox
szha closed issue #10626: Could you help build windows pypi package mxnet-cu90 
for version 1.0.0 and version 1.1.0?
URL: https://github.com/apache/incubator-mxnet/issues/10626
 
 
   




[GitHub] ThomasDelteil opened a new pull request #10637: [MXNET-352] Document behavior of mx.initializer.Constant

2018-04-21 Thread GitBox
ThomasDelteil opened a new pull request #10637: [MXNET-352] Document behavior 
of mx.initializer.Constant
URL: https://github.com/apache/incubator-mxnet/pull/10637
 
 
   ## Description ##
   As I was looking for a way to initialize my weights to a given specific 
value, I couldn't find any straightforward way to do it without hacking into 
the parameters and setting the values directly.
   It turns out you can use `mx.initializer.Constant`, even though the 
documentation states that it is only used for scalars. If you give it an 
NDArray of the right shape, you can initialize your weights with the `Constant` 
initializer, which is much cleaner than setting the data directly, since you 
don't have to work around the names of the layers and parameters.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] piiswrong commented on issue #10628: [MXNET-342] Fix the multi worker Dataloader

2018-04-21 Thread GitBox
piiswrong commented on issue #10628: [MXNET-342] Fix the multi worker Dataloader
URL: https://github.com/apache/incubator-mxnet/pull/10628#issuecomment-383356835
 
 
   This is a generic issue, not specific to `RecordIODataset`. Any dataset 
that opens a file could be affected. Does it behave the same way if the 
dataset is written in Python and opens the file with `open`?




[GitHub] ThomasDelteil opened a new issue #10638: [Feature Request] Gluon model zoo allow fine-tuning

2018-04-21 Thread GitBox
ThomasDelteil opened a new issue #10638: [Feature Request] Gluon model zoo 
allow fine-tuning
URL: https://github.com/apache/incubator-mxnet/issues/10638
 
 
   Currently it is a little bit of a faff to fine-tune a model from the 
model zoo.
   The Straight Dope takes the approach of downloading both the pre-trained 
and the untrained model and assigning the trained features to the untrained 
model.
   http://gluon.mxnet.io/chapter08_computer-vision/fine-tuning.html
   As done in the naming tutorial, you can also reassign the `.output` layer 
of the model from the model zoo, but that again requires knowledge of the code.
   
   I propose that when calling:
   ```python
   vision.resnet18_v1(pretrained=True, classes=10)
   ```
   instead of getting an error with:
   ```
   AssertionError: Failed loading Parameter 'resnetv10_dense0_weight' from saved params: shape incompatible expected (10, 512) vs saved (1000, 512)
   ```
   We get the pre-trained model with the last layer replaced with a 10-unit 
dense layer, which makes more sense from a user perspective.




[GitHub] szha commented on issue #10638: [Feature Request] Gluon model zoo allow fine-tuning

2018-04-21 Thread GitBox
szha commented on issue #10638: [Feature Request] Gluon model zoo allow 
fine-tuning
URL: 
https://github.com/apache/incubator-mxnet/issues/10638#issuecomment-383357918
 
 
   The semantics of such a call might be ambiguous. Given that we already 
consistently name the network parts "features" and "output", I think an 
easier approach is to just document this structure and make it a convention.






[GitHub] ThomasDelteil commented on issue #10628: [MXNET-342] Fix the multi worker Dataloader

2018-04-21 Thread GitBox
ThomasDelteil commented on issue #10628: [MXNET-342] Fix the multi worker 
Dataloader
URL: https://github.com/apache/incubator-mxnet/pull/10628#issuecomment-383357995
 
 
   It would behave the same way if the dataset relies on reading the file at 
run-time.
   
   To make it clearer, instead of checking for `RecordIODataset`, we could 
have `RecordIODataset` inherit from a new abstract class `FileReadingDataset`, 
for example, that documents the behavior and has an abstract method 
`reload_file` that we call in the worker loop, similarly to what is proposed 
in this PR.
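   A minimal sketch of what that could look like. The names 
`FileReadingDataset` and `reload_file` come from the proposal above; 
`LineDataset` and the worker-loop check are hypothetical illustrations, not 
actual MXNet code:
   
   ```python
   from abc import ABC, abstractmethod
   
   class FileReadingDataset(ABC):
       """Dataset that keeps an open file handle.
   
       Handles inherited across a fork share state with the parent process,
       so each DataLoader worker should call ``reload_file`` once after
       forking to get a private handle.
       """
   
       @abstractmethod
       def reload_file(self):
           """Re-open the underlying file in the current process."""
   
   class LineDataset(FileReadingDataset):
       """Toy dataset: one sample per line of a text file."""
   
       def __init__(self, path):
           self._path = path
           self.reload_file()
   
       def reload_file(self):
           # Re-opening gives this process its own handle and offset.
           self._f = open(self._path, "rb")
   
       def __getitem__(self, idx):
           self._f.seek(0)
           return self._f.read().split(b"\n")[idx]
   
       def __len__(self):
           self._f.seek(0)
           return len(self._f.read().split(b"\n"))
   
   # In the worker loop, the check would then be on the abstract class:
   #     if isinstance(dataset, FileReadingDataset):
   #         dataset.reload_file()
   ```
   
   Any future file-backed dataset would opt in by subclassing, instead of the 
worker loop special-casing `RecordIODataset`.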




[GitHub] ThomasDelteil commented on issue #10638: [Feature Request] Gluon model zoo allow fine-tuning

2018-04-21 Thread GitBox
ThomasDelteil commented on issue #10638: [Feature Request] Gluon model zoo 
allow fine-tuning
URL: 
https://github.com/apache/incubator-mxnet/issues/10638#issuecomment-383358167
 
 
   I agree that documenting the structure of the network would be very helpful 
to start with. It would also help to document that the `classes` argument is 
only valid with `pre_trained=False`, both in the docs and by throwing an 
appropriate exception.
   
   However, I don't really see the ambiguity in the semantics. Could you 
elaborate on which case would be ambiguous?




[GitHub] ThomasDelteil commented on issue #10638: [Feature Request] Gluon model zoo allow fine-tuning

2018-04-21 Thread GitBox
ThomasDelteil commented on issue #10638: [Feature Request] Gluon model zoo 
allow fine-tuning
URL: 
https://github.com/apache/incubator-mxnet/issues/10638#issuecomment-383358379
 
 
   Also @szha, I think another common use case is to use a network as a 
featurizer. Currently the only way I found, without going to the symbolic 
representation, is to replace the output layer with a bias-free dense layer 
whose weight is the identity matrix. It would be nice to have an option like 
`Featurizer=True` that sets the output layer to a pass-through `HybridLambda`, 
for example.




[GitHub] luoyetx commented on issue #10544: name_scope/prefix doesn't work

2018-04-21 Thread GitBox
luoyetx commented on issue #10544: name_scope/prefix doesn't work
URL: 
https://github.com/apache/incubator-mxnet/issues/10544#issuecomment-383358915
 
 
   Parameters of the `SymbolBlock` are not saved.
   
   ```python
   import mxnet as mx
   from mxnet import gluon as gl, nd
   from mxnet.gluon import nn
   
   
   class Net(gl.HybridBlock):
   def __init__(self):
   super(Net, self).__init__()
   with self.name_scope():
   backbone = gl.model_zoo.vision.resnet50_v1()
   data = mx.sym.var('data')
   featnames = ['stage1_activation2', 'stage2_activation3', 'stage3_activation5']
   out_names = ['_'.join([backbone.name, featname, 'output']) for featname in featnames]
   internals = backbone(data).get_internals()
   outs = [internals[out_name] for out_name in out_names]
   self.backbone = gl.SymbolBlock(outs, data, params=backbone.collect_params())
   self.body = nn.Conv2D(3, 1)
   
   def hybrid_forward(self, F, x):
   x = self.body(x)
   return self.backbone(x)
   
   ctx = mx.cpu()
   net = Net()
   net.initialize(mx.init.Normal(), ctx=ctx)
   net.hybridize()
   
   net(nd.random.normal(shape=(1, 3, 224, 224)))
   net.save_params('./test.params')
   for k, v in nd.load('./test.params').items():
   print(k)
   
   for k, v in net.collect_params().items():
   print(k)
   ```
   
   gets
   
   ```
   body.bias
   body.weight
   ```
   
   and
   
   ```
   net5_resnetv10_conv0_weight
   net5_resnetv10_batchnorm0_gamma
   net5_resnetv10_batchnorm0_beta
   net5_resnetv10_stage1_conv0_weight
   net5_resnetv10_stage1_conv0_bias
   net5_resnetv10_stage1_batchnorm0_gamma
   net5_resnetv10_stage1_batchnorm0_beta
   net5_resnetv10_stage1_conv1_weight
   net5_resnetv10_stage1_batchnorm1_gamma
   net5_resnetv10_stage1_batchnorm1_beta
   net5_resnetv10_stage1_conv2_weight
   net5_resnetv10_stage1_conv2_bias
   net5_resnetv10_stage1_batchnorm2_gamma
   net5_resnetv10_stage1_batchnorm2_beta
   net5_resnetv10_stage1_conv3_weight
   net5_resnetv10_stage1_batchnorm3_gamma
   net5_resnetv10_stage1_batchnorm3_beta
   net5_resnetv10_stage1_conv4_weight
   net5_resnetv10_stage1_conv4_bias
   net5_resnetv10_stage1_batchnorm4_gamma
   net5_resnetv10_stage1_batchnorm4_beta
   net5_resnetv10_stage1_conv5_weight
   net5_resnetv10_stage1_batchnorm5_gamma
   net5_resnetv10_stage1_batchnorm5_beta
   net5_resnetv10_stage1_conv6_weight
   net5_resnetv10_stage1_conv6_bias
   net5_resnetv10_stage1_batchnorm6_gamma
   net5_resnetv10_stage1_batchnorm6_beta
   net5_resnetv10_stage1_conv7_weight
   net5_resnetv10_stage1_conv7_bias
   net5_resnetv10_stage1_batchnorm7_gamma
   net5_resnetv10_stage1_batchnorm7_beta
   net5_resnetv10_stage1_conv8_weight
   net5_resnetv10_stage1_batchnorm8_gamma
   net5_resnetv10_stage1_batchnorm8_beta
   net5_resnetv10_stage1_conv9_weight
   net5_resnetv10_stage1_conv9_bias
   net5_resnetv10_stage1_batchnorm9_gamma
   net5_resnetv10_stage1_batchnorm9_beta
   net5_resnetv10_stage2_conv0_weight
   net5_resnetv10_stage2_conv0_bias
   net5_resnetv10_stage2_batchnorm0_gamma
   net5_resnetv10_stage2_batchnorm0_beta
   net5_resnetv10_stage2_conv1_weight
   net5_resnetv10_stage2_batchnorm1_gamma
   net5_resnetv10_stage2_batchnorm1_beta
   net5_resnetv10_stage2_conv2_weight
   net5_resnetv10_stage2_conv2_bias
   net5_resnetv10_stage2_batchnorm2_gamma
   net5_resnetv10_stage2_batchnorm2_beta
   net5_resnetv10_stage2_conv3_weight
   net5_resnetv10_stage2_batchnorm3_gamma
   net5_resnetv10_stage2_batchnorm3_beta
   net5_resnetv10_stage2_conv4_weight
   net5_resnetv10_stage2_conv4_bias
   net5_resnetv10_stage2_batchnorm4_gamma
   net5_resnetv10_stage2_batchnorm4_beta
   net5_resnetv10_stage2_conv5_weight
   net5_resnetv10_stage2_batchnorm5_gamma
   net5_resnetv10_stage2_batchnorm5_beta
   net5_resnetv10_stage2_conv6_weight
   net5_resnetv10_stage2_conv6_bias
   net5_resnetv10_stage2_batchnorm6_gamma
   net5_resnetv10_stage2_batchnorm6_beta
   net5_resnetv10_stage2_conv7_weight
   net5_resnetv10_stage2_conv7_bias
   net5_resnetv10_stage2_batchnorm7_gamma
   net5_resnetv10_stage2_batchnorm7_beta
   net5_resnetv10_stage2_conv8_weight
   net5_resnetv10_stage2_batchnorm8_gamma
   net5_resnetv10_stage2_batchnorm8_beta
   net5_resnetv10_stage2_conv9_weight
   net5_resnetv10_stage2_conv9_bias
   net5_resnetv10_stage2_batchnorm9_gamma
   net5_resnetv10_stage2_batchnorm9_beta
   net5_resnetv10_stage2_conv10_weight
   net5_resnetv10_stage2_conv10_bias
   net5_resnetv10_stage2_batchnorm10_gamma
   net5_resnetv10_stage2_batchnorm10_beta
   net5_resnetv10_stage2_conv11_weight
   net5_resnetv10_stage2_batchnorm11_gamma
   net5_resnetv10_stage2_batchnorm11_beta
   net5_resnetv10_stage2_conv12_weight
   net5_resnetv10_stage2_conv12_bias
   net5_resnetv10_stage2_batchnorm12_gamma
   net5_resnetv10_stage2_batchnorm12_beta
   net5_resnetv10_stage3_conv0_weight
   net5_resnetv10_stage3_con

[GitHub] szha commented on issue #10638: [Feature Request] Gluon model zoo allow fine-tuning

2018-04-21 Thread GitBox
szha commented on issue #10638: [Feature Request] Gluon model zoo allow 
fine-tuning
URL: 
https://github.com/apache/incubator-mxnet/issues/10638#issuecomment-383359131
 
 
   In the proposed design, without knowledge of the actual implementation, the 
call may be interpreted either as "get me a pre-trained model and replace the 
output layer with one of size 10", or as "get me a pre-trained model that was 
trained on 10 classes".
   
   For using the network as a feature extractor, simply use the `net.features` 
block.




[GitHub] ThomasDelteil commented on issue #10638: [Feature Request] Gluon model zoo allow fine-tuning

2018-04-21 Thread GitBox
ThomasDelteil commented on issue #10638: [Feature Request] Gluon model zoo 
allow fine-tuning
URL: 
https://github.com/apache/incubator-mxnet/issues/10638#issuecomment-383358167
 
 
   I agree that documenting the structure of the network would be very helpful 
to start with. It would also help to document that the `classes` argument is 
only valid with `pre_trained=False`, both in the docs and by throwing an 
appropriate exception.
   
   However, I don't really see the ambiguity in the semantics. Could you 
elaborate on which case would be ambiguous?
   
   edit: for mobilenet and squeezenet it is not completely straightforward, as 
the last layers are not dense layers, so the method used should be the one 
featured in the Straight Dope.




[GitHub] ThomasDelteil commented on issue #10638: [Feature Request] Gluon model zoo allow fine-tuning

2018-04-21 Thread GitBox
ThomasDelteil commented on issue #10638: [Feature Request] Gluon model zoo 
allow fine-tuning
URL: 
https://github.com/apache/incubator-mxnet/issues/10638#issuecomment-383359513
 
 
   :+1: that's much simpler indeed, thanks. Maybe having a `PreTrained` class 
that documents `.features` and `.output`, and the various usages, could be a 
good idea.
   
   I see the ambiguity now. One way to circumvent it could be that explicitly 
setting the number of classes means getting an untrained last layer. But the 
issue is that this would not be backward compatible for people who already set 
pre_trained=True and num_class=1000 :/ Though we could make an exception for 
these and return the pre-trained layer.

