[GitHub] szha commented on issue #9571: ACCURACY IS USING NUMPY, URGENT FIX

2018-02-06 Thread GitBox
szha commented on issue #9571: ACCURACY IS USING NUMPY, URGENT FIX
URL: 
https://github.com/apache/incubator-mxnet/issues/9571#issuecomment-363683905
 
 
   There was a report of a performance regression on an 8-GPU Volta machine after 
switching to ndarray, so I'm reverting the ndarray accuracy change and reopening 
this issue for tracking.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zackchase opened a new issue #9571: ACCURACY IS USING NUMPY, URGENT FIX

2018-02-06 Thread GitBox
zackchase opened a new issue #9571: ACCURACY IS USING NUMPY, URGENT FIX
URL: https://github.com/apache/incubator-mxnet/issues/9571
 
 
   Accuracy is the bottleneck even for a 4,000-node neural network. This is a 
practical job that many people will run and one which a GPU ***CAN accelerate***.
   
   But right now the accuracy metric is the bottleneck. That's because it 
converts everything to numpy, causing a blocking operation at every iteration. 
Let's fix it stat.
   
 
   ```
   for label, pred_label in zip(labels, preds):
       if pred_label.shape != label.shape:
           pred_label = ndarray.argmax(pred_label, axis=self.axis)
       pred_label = pred_label.asnumpy().astype('int32')
       label = label.asnumpy().astype('int32')

       check_label_shapes(label, pred_label)

       self.sum_metric += (pred_label.flat == label.flat).sum()
       self.num_inst += len(pred_label.flat)
   ```
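   A minimal sketch of the direction discussed here (not the actual metric code, 
and the helper name `update_acc` is made up): accumulate the correct-count on the 
device with NDArray operations and defer the blocking copy until the metric is 
actually reported.
   
   ```
   from mxnet import ndarray

   def update_acc(labels, preds, axis=-1):
       """Accumulate (num_correct, num_inst) without an asnumpy() per batch."""
       num_correct = 0
       num_inst = 0
       for label, pred in zip(labels, preds):
           if pred.shape != label.shape:
               pred = ndarray.argmax(pred, axis=axis)
           # the comparison and sum stay asynchronous on the device
           num_correct = num_correct + (pred == label.astype(pred.dtype)).sum()
           num_inst += pred.size
       # num_correct is an NDArray; call .asscalar() only when the value is read
       return num_correct, num_inst
   ```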
   




[GitHub] szha opened a new pull request #9731: revert acc changes

2018-02-06 Thread GitBox
szha opened a new pull request #9731: revert acc changes
URL: https://github.com/apache/incubator-mxnet/pull/9731
 
 
   ## Description ##
   Using ndarray for the metric negatively impacts performance on Volta GPUs, 
and no simple workaround is available, so this rolls back the change until the 
underlying issue is addressed.
   
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - [x] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change




[GitHub] TaoLv opened a new pull request #9730: Check padding size for global pooling

2018-02-06 Thread GitBox
TaoLv opened a new pull request #9730: Check padding size for global pooling
URL: https://github.com/apache/incubator-mxnet/pull/9730
 
 
   ## Description ##
   Check padding size for global pooling. #9714 
   
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - [ ] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - Check padding size for pooling, cudnn pooling and pooling_v1
   - Add a Python test case for global pooling with user-specified padding and 
stride sizes (a minimal sketch of such a case is shown below)
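   For illustration, a hedged sketch (hypothetical, not the PR's actual test) that 
just constructs the case being guarded: global pooling combined with user-specified 
`pad` and `stride`. How the operator should handle such arguments is exactly what 
this PR checks.
   
   ```
   import mxnet as mx

   data = mx.nd.random.uniform(shape=(1, 3, 8, 8))
   # global pooling with a user-specified pad/stride: the case reported in #9714
   out = mx.nd.Pooling(data, kernel=(2, 2), pool_type='max',
                       global_pool=True, pad=(1, 1), stride=(2, 2))
   print(out.shape)  # expected (1, 3, 1, 1) once padding is ignored/validated
   ```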
   
   




[GitHub] shuokay closed pull request #9628: kill mxnet using undefault ssh port

2018-02-06 Thread GitBox
shuokay closed pull request #9628: kill mxnet using undefault ssh port
URL: https://github.com/apache/incubator-mxnet/pull/9628
 
 
   




[GitHub] lxtGH commented on issue #4720: example/ssd error:Operator Scale is not registered

2018-02-06 Thread GitBox
lxtGH commented on issue #4720: example/ssd error:Operator Scale is not 
registered
URL: 
https://github.com/apache/incubator-mxnet/issues/4720#issuecomment-363675286
 
 
   try to compile mxnet with CUDNN=1




[GitHub] HuangZhanPeng opened a new issue #9729: how to use the transforms

2018-02-06 Thread GitBox
HuangZhanPeng opened a new issue #9729: how to use the transforms 
URL: https://github.com/apache/incubator-mxnet/issues/9729
 
 
   I tried mxnet.gluon.data.vision.transforms with the following code:
   
   ```
   from mxnet import gluon
   from mxnet.gluon.data.vision import transforms
   
   train_data = gluon.data.vision.MNIST(train=True, transform=transforms.ToTensor())
   test_data = gluon.data.vision.MNIST(train=False, transform=transforms.ToTensor())
   
   train_loader = gluon.data.DataLoader(train_data, 100, shuffle=True)
   test_loader = gluon.data.DataLoader(test_data, 100, shuffle=False)
   
   for epoch in range(2):
       for i, data in enumerate(train_loader, 0):
           inputs, labels = data
           print(type(inputs), type(labels))
   ```
   
   but it raises the following error:
   
   Traceback (most recent call last):
     File "E:/project/GluonZeroToAll/test.py", line 11, in 
       for i, data in enumerate(train_loader, 0):
     File "C:\ProgramData\Anaconda3\lib\site-packages\mxnet\gluon\data\dataloader.py", line 202, in __iter__
       yield self._batchify_fn([self._dataset[idx] for idx in batch])
     File "C:\ProgramData\Anaconda3\lib\site-packages\mxnet\gluon\data\dataloader.py", line 202, in 
       yield self._batchify_fn([self._dataset[idx] for idx in batch])
     File "C:\ProgramData\Anaconda3\lib\site-packages\mxnet\gluon\data\dataset.py", line 207, in __getitem__
       return self._transform(self._data[idx], self._label[idx])
     File "C:\ProgramData\Anaconda3\lib\site-packages\mxnet\gluon\block.py", line 360, in __call__
       return self.forward(*args)
     File "C:\ProgramData\Anaconda3\lib\site-packages\mxnet\gluon\block.py", line 575, in forward
       return self.hybrid_forward(ndarray, x, *args, **params)
   TypeError: hybrid_forward() takes 3 positional arguments but 4 were given
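   A possible workaround sketch (hedged, not necessarily the recommended API): the 
dataset calls `transform(data, label)` with two arguments, while `ToTensor` only 
accepts the data, so wrap the transform so that it touches only the image:
   
   ```
   from mxnet import gluon
   from mxnet.gluon.data.vision import transforms

   to_tensor = transforms.ToTensor()

   # apply the transform to the image only and pass the label through unchanged
   train_data = gluon.data.vision.MNIST(
       train=True,
       transform=lambda data, label: (to_tensor(data), label))
   train_loader = gluon.data.DataLoader(train_data, 100, shuffle=True)
   ```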
   




[GitHub] szha commented on a change in pull request #9492: fix print_summary bug and add groups of convolution

2018-02-06 Thread GitBox
szha commented on a change in pull request #9492: fix print_summary bug and add 
groups of convolution
URL: https://github.com/apache/incubator-mxnet/pull/9492#discussion_r166524435
 
 

 ##
 File path: python/mxnet/visualization.py
 ##
 @@ -134,17 +134,23 @@ def print_layer_summary(node, out_shape):
                             pre_filter = pre_filter + int(shape[0])
         cur_param = 0
         if op == 'Convolution':
-            if ("no_bias" in node["attrs"]) and int(node["attrs"]["no_bias"]):
-                cur_param = pre_filter * int(node["attrs"]["num_filter"])
+            if ("no_bias" in node["attrs"]) and node["attrs"]["no_bias"] == 'True':
+                num_group = int(node["attrs"]["num_group"]) if \
+                    ("num_group" in node["attrs"]) else 1
+                cur_param = (pre_filter * int(node["attrs"]["num_filter"])) \
+                    // num_group
                 for k in _str2tuple(node["attrs"]["kernel"]):
                     cur_param *= int(k)
             else:
-                cur_param = pre_filter * int(node["attrs"]["num_filter"])
+                num_group = int(node["attrs"]["num_group"]) if \
+                    ("num_group" in node["attrs"]) else 1
+                cur_param = (pre_filter * int(node["attrs"]["num_filter"])) \
+                    // num_group
                 for k in _str2tuple(node["attrs"]["kernel"]):
                     cur_param *= int(k)
                 cur_param += int(node["attrs"]["num_filter"])
         elif op == 'FullyConnected':
-            if ("no_bias" in node["attrs"]) and int(node["attrs"]["no_bias"]):
+            if ("no_bias" in node["attrs"]) and node["attrs"]["no_bias"] == 'True':
 
 Review comment:
   That would create an injection point, since the files may not be trusted. 
(e.g. `"no_bias": "import shutil; shutil.rmtree('/')"`)
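   For illustration, a hedged sketch of parsing the stringified flag without 
`eval()` (the helper name is made up; the JSON attribute values are plain strings 
such as 'True'/'False'), so untrusted input is never executed:
   
   ```
   def attr_is_true(attrs, key):
       """Interpret a stringified boolean attribute without executing it."""
       value = str(attrs.get(key, 'False')).strip()
       return value in ('True', 'true', '1')

   # in the spirit of the code under review:
   # if attr_is_true(node["attrs"], "no_bias"): ...
   ```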




[GitHub] yzhliu opened a new pull request #9728: PGP keys add liuyizhi AT apache.org

2018-02-06 Thread GitBox
yzhliu opened a new pull request #9728: PGP keys add liuyizhi AT apache.org
URL: https://github.com/apache/incubator-mxnet/pull/9728
 
 
   ## Description ##
   Add developer PGP key
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - [ ] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   




[GitHub] lebeg opened a new pull request #9727: Refactored storage handling

2018-02-06 Thread GitBox
lebeg opened a new pull request #9727: Refactored storage handling
URL: https://github.com/apache/incubator-mxnet/pull/9727
 
 
   ## Description ##
   
   Made Storage::Handle objects and the memory they manage reference counted.
   
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
 - Unit tests are added
 - Added a functional test
   - [x] Code is well-documented: 
 - Existing docs improved
 - New methods documented
   - [x] To my best knowledge, examples are not affected by this change
   
   ### Changes ###
   - Ensured that access to memory pointed to by Storage::Handles is protected by 
reference counting
   - Introduced a common interface to public Storage class and the underlying 
internal StorageManagers
   - Changed referencing of shared memory to a string key
   - Separated platform specific CPUSharedStorageManager implementations
   - Introduced a functional C++ test
   
   - Moved the NDArray::Chunk implementation to a .cc file to reduce compile time 
and dependencies
   
   ## Comments ##
   - The change has grown significantly due to a lot of usages in the code.
   
   ## Still to be done in next iterations ##
   - Add Android implementation of CPUSharedStorageManager
   - Introduce locking for shared memory
   - Add coverage and more edge cases to Storage unit testing
   - Add functional test to automation




[GitHub] allenxcp opened a new issue #9726: JSONReader: Unknown field attrs, candidates are:

2018-02-06 Thread GitBox
allenxcp opened a new issue #9726: JSONReader: Unknown field attrs, candidates 
are: 
URL: https://github.com/apache/incubator-mxnet/issues/9726
 
 
   @mli 
   
   ## Description
   I trained my model on an Ubuntu system with Python code.
   I test the model on a Mac with C++ code. When I load the xxx.json, the 
following error happens:
   
   WorkSpace/mxnet/dmlc-core/include/dmlc/./json.h:842: JSONReader: Unknown 
field attrs, candidates are: 
   "attr"
   "backward_source_id"
   "control_deps"
   "inputs"
   "name"
   "op"
   "param"
   
   ## Environment info (Required)
   ```
   --Python Info--
   ('Version  :', '2.7.10')
   ('Compiler :', 'GCC 4.2.1 Compatible Apple LLVM 9.0.0 (clang-900.0.31)')
   ('Build:', ('default', 'Jul 15 2017 17:16:57'))
   ('Arch :', ('64bit', ''))
   Pip Info---
   ('Version  :', '9.0.1')
   ('Directory:', 
'/Library/Python/2.7/site-packages/pip-9.0.1-py2.7.egg/pip')
   --MXNet Info---
   ('Version  :', '1.0.0')
   ('Directory:', '/Library/Python/2.7/site-packages/mxnet')
   ('Commit Hash   :', '25720d0e3c29232a37e2650f3ba3a2454f9367bb')
   --System Info--
   ('Platform :', 'Darwin-17.4.0-x86_64-i386-64bit')
   ('system   :', 'Darwin')
   ('node :', 'TJiadeMacBook-Pro.local')
   ('release  :', '17.4.0')
   ('version  :', 'Darwin Kernel Version 17.4.0: Sun Dec 17 09:19:54 PST 
2017; root:xnu-4570.41.2~1/RELEASE_X86_64')
   --Hardware Info--
   ('machine  :', 'x86_64')
   ('processor:', 'i386')
   machdep.cpu.brand_string: Intel(R) Core(TM) i5-4278U CPU @ 2.60GHz
   machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE 
MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ 
DTES64 MON DSCPL VMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC MOVBE 
POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C
   machdep.cpu.leaf7_features: SMEP ERMS RDWRFSGS TSC_THREAD_OFFSET BMI1 AVX2 
BMI2 INVPCID FPU_CSDS
   machdep.cpu.extfeatures: SYSCALL XD 1GBPAGE EM64T LAHF LZCNT RDTSCP TSCI
   --Network Test--
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 1.0592 
sec, LOAD: 1.6985 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0678 sec, LOAD: 
0.5445 sec.
   Timing for FashionMNIST: 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz,
 DNS: 0.0932 sec, LOAD: 0.5238 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0547 sec, 
LOAD: 0.2792 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.4792 sec, LOAD: 
0.1389 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.4740 sec, LOAD: 
0.4458 sec.
   ```
   
   ## Build info (Required if built from source)
   
   Compiler:
   ubuntu gcc ;
   MAC: clang
   
   MXNet commit hash:
   ubuntu and MAC are the same version 1000
   
   Error:
   ## Error Message:
   /Users/tjia/Desktop/WorkSpace/mxnet/dmlc-core/include/dmlc/logging.h:308: 
[11:41:47] 
/Users/tjia/Desktop/WorkSpace/mxnet/dmlc-core/include/dmlc/./json.h:842: 
JSONReader: Unknown field attrs, candidates are: 
   "attr"
   "backward_source_id"
   "control_deps"
   "inputs"
   "name"
   "op"
   "param"
   
   
   Stack trace returned 10 entries:
   [bt] (0) 0   libmxnet.so 0x0001003a7f18 
_ZN4dmlc15LogMessageFatalD2Ev + 40
   [bt] (1) 1   libmxnet.so 0x000100f04179 
_ZN4dmlc20JSONObjectReadHelper13ReadAllFieldsEPNS_10JSONReaderE + 409
   [bt] (2) 2   libmxnet.so 0x000100f04e6d 
_ZN4dmlc20JSONObjectReadHelper14ReaderFunctionINSt3__16vectorIN4nnvm4pass12_GLOBAL__N_18JSONNodeENS2_9allocatorIS7_EEvPNS_10JSONReaderEPv
 + 1149
   [bt] (3) 3   libmxnet.so 0x000100f04247 
_ZN4dmlc20JSONObjectReadHelper13ReadAllFieldsEPNS_10JSONReaderE + 615
   [bt] (4) 4   libmxnet.so 0x000100f00aa1 
_ZN4nnvm4pass12_GLOBAL__N_18LoadJSONENS_5GraphE + 2033
   [bt] (5) 5   libmxnet.so 0x000100c78b61 
_ZNSt3__128__invoke_void_return_wrapperIN4nnvm5GraphEE6__callIJRPFS2_S2_ES2_EEES2_DpOT_
 + 209
   [bt] (6) 6   libmxnet.so 0x000100c78a52 
_ZNSt3__110__function6__funcIPFN4nnvm5GraphES3_ENS_9allocatorIS5_EES4_EclEOS3_ 
+ 18
   [bt] (7) 7   libmxnet.so 0x000100ecf708 
_ZN4nnvm11ApplyPassesENS_5GraphERKNSt3__16vectorINS1_12basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIcNS6_IS8_
 + 1448
   [bt] (8) 8   libmxnet.so 0x000100b6906b 
_ZN4nnvm9ApplyPassENS_5GraphERKNSt3__112basic_stringIcNS1_11char_traitsIcEENS1_9allocatorIc
 + 187
   [bt] (9) 9   libmxnet.so 0x000100c77112 
_ZN5mxnet18LoadLegacyJSONPassEN4nnvm5GraphE + 418
   
   libc++abi.dylib: ter

[GitHub] ZiyueHuang commented on a change in pull request #9697: fix NAG if multi_precision = true

2018-02-06 Thread GitBox
ZiyueHuang commented on a change in pull request #9697: fix NAG if 
multi_precision = true
URL: https://github.com/apache/incubator-mxnet/pull/9697#discussion_r166505544
 
 

 ##
 File path: tests/python/unittest/test_optimizer.py
 ##
 @@ -357,6 +357,118 @@ def test_std_sparse_sgd():
                                       w_stype='row_sparse', g_stype='row_sparse')
 
 
+class PyNAG(PySGD):
+    def __init__(self, **kwargs):
+        super(PyNAG, self).__init__(**kwargs)
 
 Review comment:
   I think PySGD in the test is appropriate for PyNAG to inherit from. In SGD, 
_update_impl is responsible for the actual update (both update and 
update_multi_precision call this function). In PySGD, update is responsible for 
the actual update. So in PyNAG, overriding the update function of PySGD is OK. 
This inheritance makes the implementation less verbose in the test.
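   A minimal, hypothetical sketch of the pattern (simplified, not the actual test 
code): PyNAG reuses PySGD and only overrides the update step.
   
   ```
   class PySGD(object):
       def __init__(self, learning_rate=0.01, **kwargs):
           self.lr = learning_rate

       def update(self, index, weight, grad, state):
           weight[:] = weight - self.lr * grad  # plain SGD step

   class PyNAG(PySGD):
       def __init__(self, momentum=0.9, **kwargs):
           super(PyNAG, self).__init__(**kwargs)
           self.momentum = momentum

       def update(self, index, weight, grad, state):
           # simplified Nesterov-style step; state holds the momentum buffer
           state[:] = self.momentum * state + grad
           weight[:] = weight - self.lr * (grad + self.momentum * state)
   ```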




[GitHub] yifeim opened a new issue #9725: symbol logistic regression 'acc' metric stuck on training

2018-02-06 Thread GitBox
yifeim opened a new issue #9725: symbol logistic regression 'acc' metric stuck 
on training
URL: https://github.com/apache/incubator-mxnet/issues/9725
 
 
   ## Description
   
   With symbol modules, the training accuracy indicator seems unchanged, yet 
the true accuracy does improve. The problem goes away if a custom eval_metric 
is provided.
   
   ## Environment info (Required)
   
   ```
   --Python Info--
   Version  : 3.6.4
   Compiler : GCC 7.2.0
   Build: ('default', 'Jan 16 2018 18:10:19')
   Arch : ('64bit', '')
   Pip Info---
   Version  : 9.0.1
   Directory: 
/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/pip
   --MXNet Info---
   Version  : 1.0.0
   Directory: 
/home/ubuntu/anaconda3/envs/mxnet_p36/lib/python3.6/site-packages/mxnet
   Commit Hash   : 9ef196909ec7bf9cdda66d5b97c92793109798e1
   --System Info--
   Platform : Linux-4.4.0-1049-aws-x86_64-with-debian-stretch-sid
   system   : Linux
   node : ip-172-31-18-237
   release  : 4.4.0-1049-aws
   version  : #58-Ubuntu SMP Fri Jan 12 23:17:09 UTC 2018
   --Hardware Info--
   machine  : x86_64
   processor: x86_64
   --Network Test--
   Setting timeout: 10
   Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0013 
sec, LOAD: 0.4517 sec.
   Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0006 sec, LOAD: 
0.0210 sec.
   Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0003 sec, LOAD: 
0.0482 sec.
   Timing for FashionMNIST: 
https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz,
 DNS: 0.0003 sec, LOAD: 0.1803 sec.
   Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0094 sec, LOAD: 
0.0403 sec.
   Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0003 sec, 
LOAD: 0.0551 sec.
   ```
   
   Package used (Python/R/Scala/Julia):
   I'm using Python.
   
   ## Minimum reproducible example
   (If you are using your own code, please provide a short script that 
reproduces the error. Otherwise, please provide link to the existing example.)
   
   
   
[mxnet-issue.zip](https://github.com/apache/incubator-mxnet/files/1701492/mxnet-issue.zip)
   
   
   ## Steps to reproduce
   (Paste the commands you ran that produced the error.)
   
   1. Provided both html snapshot and ipynb file.
   
   
   ## What have you tried to solve it?
   
   1. Using a custom eval_metric seemed to solve the problem (a minimal sketch is 
shown below). However, it is still useful to find the original cause, to be on 
the safe side.
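   For reference, a hedged sketch of such a custom metric (the feval name and the 
binary-accuracy assumption are illustrative, not taken from the attached notebook):
   
   ```
   import numpy as np
   import mxnet as mx

   def binary_acc(label, pred):
       # label: (N,) targets in {0, 1}; pred: (N,) or (N, 1) probabilities
       pred = pred.reshape(-1)
       label = label.reshape(-1)
       return np.mean((pred > 0.5) == (label > 0.5))

   eval_metric = mx.metric.CustomMetric(binary_acc, name='binary_acc')
   # mod.fit(..., eval_metric=eval_metric)
   ```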
   




[GitHub] rahul003 commented on issue #9186: How to train model with multi machines

2018-02-06 Thread GitBox
rahul003 commented on issue #9186: How to train model with multi machines
URL: 
https://github.com/apache/incubator-mxnet/issues/9186#issuecomment-363635391
 
 
   Have you installed mxnet by running `sudo python setup.py install` from 
`incubator-mxnet/python/` on each of these machines?
   




[GitHub] GoodJoey commented on issue #9709: what will happen if one of the node reboot when doing the distribute training?

2018-02-06 Thread GitBox
GoodJoey commented on issue #9709: what will happen if one of the node reboot 
when doing the distribute training?
URL: 
https://github.com/apache/incubator-mxnet/issues/9709#issuecomment-363634872
 
 
   got it, thanks.




[GitHub] GoodJoey closed issue #9709: what will happen if one of the node reboot when doing the distribute training?

2018-02-06 Thread GitBox
GoodJoey closed issue #9709: what will happen if one of the node reboot when 
doing the distribute training?
URL: https://github.com/apache/incubator-mxnet/issues/9709
 
 
   




[GitHub] eric-haibin-lin commented on a change in pull request #9697: fix NAG if multi_precision = true

2018-02-06 Thread GitBox
eric-haibin-lin commented on a change in pull request #9697: fix NAG if 
multi_precision = true
URL: https://github.com/apache/incubator-mxnet/pull/9697#discussion_r166502330
 
 

 ##
 File path: tests/python/unittest/test_optimizer.py
 ##
 @@ -357,6 +357,118 @@ def test_std_sparse_sgd():
                                       w_stype='row_sparse', g_stype='row_sparse')
 
 
+class PyNAG(PySGD):
+    def __init__(self, **kwargs):
+        super(PyNAG, self).__init__(**kwargs)
 
 Review comment:
   Why inherit from PySGD in test?? 




[GitHub] zhangguotai opened a new issue #9724: undefined symbol: MXSymbolGetNumOutputs

2018-02-06 Thread GitBox
zhangguotai opened a new issue #9724: undefined symbol: MXSymbolGetNumOutputs
URL: https://github.com/apache/incubator-mxnet/issues/9724
 
 
   I had completed testing and training with a Python script on a PC. Now I am 
porting to ARM, and `import mxnet` works.
   But I encounter this problem: AttributeError: 
/usr/lib/python2.7/site-packages/mxnet/libmxnet.so: undefined symbol: 
MXSymbolGetNumOutputs
   I don't know how to solve it.




[GitHub] chinakook commented on issue #9700: Squeeze op

2018-02-06 Thread GitBox
chinakook commented on issue #9700: Squeeze op
URL: https://github.com/apache/incubator-mxnet/pull/9700#issuecomment-363626265
 
 
   @reminisce yes, it's equivalent.




[GitHub] chinakook commented on a change in pull request #9492: fix print_summary bug and add groups of convolution

2018-02-06 Thread GitBox
chinakook commented on a change in pull request #9492: fix print_summary bug 
and add groups of convolution
URL: https://github.com/apache/incubator-mxnet/pull/9492#discussion_r166496299
 
 

 ##
 File path: python/mxnet/visualization.py
 ##
 @@ -134,17 +134,23 @@ def print_layer_summary(node, out_shape):
                             pre_filter = pre_filter + int(shape[0])
         cur_param = 0
         if op == 'Convolution':
-            if ("no_bias" in node["attrs"]) and int(node["attrs"]["no_bias"]):
-                cur_param = pre_filter * int(node["attrs"]["num_filter"])
+            if ("no_bias" in node["attrs"]) and node["attrs"]["no_bias"] == 'True':
+                num_group = int(node["attrs"]["num_group"]) if \
+                    ("num_group" in node["attrs"]) else 1
+                cur_param = (pre_filter * int(node["attrs"]["num_filter"])) \
+                    // num_group
                 for k in _str2tuple(node["attrs"]["kernel"]):
                     cur_param *= int(k)
             else:
-                cur_param = pre_filter * int(node["attrs"]["num_filter"])
+                num_group = int(node["attrs"]["num_group"]) if \
+                    ("num_group" in node["attrs"]) else 1
+                cur_param = (pre_filter * int(node["attrs"]["num_filter"])) \
+                    // num_group
                 for k in _str2tuple(node["attrs"]["kernel"]):
                     cur_param *= int(k)
                 cur_param += int(node["attrs"]["num_filter"])
         elif op == 'FullyConnected':
-            if ("no_bias" in node["attrs"]) and int(node["attrs"]["no_bias"]):
+            if ("no_bias" in node["attrs"]) and node["attrs"]["no_bias"] == 'True':
 
 Review comment:
   I think I can replace `bool(bool_str)` with the built-in function `eval(bool_str)`.




[GitHub] melody-rain closed issue #7398: inconsistent results when infering

2018-02-06 Thread GitBox
melody-rain closed issue #7398: inconsistent results when infering
URL: https://github.com/apache/incubator-mxnet/issues/7398
 
 
   




[GitHub] TaoLv commented on issue #9714: Does global pooling support padding?

2018-02-06 Thread GitBox
TaoLv commented on issue #9714: Does global pooling support padding?
URL: 
https://github.com/apache/incubator-mxnet/issues/9714#issuecomment-363622664
 
 
   @piiswrong OK. I can help to fix that.




[incubator-mxnet] branch v1.1.0 updated: Update NOTICE (#9706)

2018-02-06 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch v1.1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.1.0 by this push:
 new 8cc5e97  Update NOTICE (#9706)
8cc5e97 is described below

commit 8cc5e97b95421692fe20c7c4575db48564761dda
Author: Haibin Lin 
AuthorDate: Tue Feb 6 09:25:50 2018 -0800

Update NOTICE (#9706)
---
 NOTICE | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/NOTICE b/NOTICE
index a12b99f..98321cb 100644
--- a/NOTICE
+++ b/NOTICE
@@ -1,5 +1,5 @@
 Apache MXNET (incubating)
-Copyright 2017- The Apache Software Foundation
+Copyright 2017-2018 The Apache Software Foundation
 
 This product includes software developed at
 The Apache Software Foundation (http://www.apache.org/).

-- 
To stop receiving notification emails like this one, please contact
hai...@apache.org.


[GitHub] marcoabreu commented on a change in pull request #9552: [REQUEST FOR REVIEW | DO NOT MERGE] Model Quantization with Calibration

2018-02-06 Thread GitBox
marcoabreu commented on a change in pull request #9552: [REQUEST FOR REVIEW | 
DO NOT MERGE] Model Quantization with Calibration
URL: https://github.com/apache/incubator-mxnet/pull/9552#discussion_r166487675
 
 

 ##
 File path: src/operator/quantization/quantized_conv.cc
 ##
 @@ -23,6 +23,7 @@
  * \brief
  * \author Ziheng Jiang, Jun Wu
 */
+#ifndef _MSC_VER
 
 Review comment:
   I think we should not offer different functionality on Windows than on other 
systems.




[GitHub] szha commented on issue #8570: How to reduce the memory when load the trained model?

2018-02-06 Thread GitBox
szha commented on issue #8570: How to reduce the memory when load the trained 
model?
URL: 
https://github.com/apache/incubator-mxnet/issues/8570#issuecomment-363613199
 
 
   @apache/mxnet-committers: This issue has been inactive for the past 90 days. 
It has no label and needs triage.
   
   For general "how-to" questions, our [user forum](https://discuss.mxnet.io/) 
(and [Chinese version](https://discuss.gluon.ai/)) is a good place to get help.




[GitHub] szha commented on issue #7398: inconsistent results when infering

2018-02-06 Thread GitBox
szha commented on issue #7398: inconsistent results when infering
URL: 
https://github.com/apache/incubator-mxnet/issues/7398#issuecomment-363613203
 
 
   @apache/mxnet-committers: This issue has been inactive for the past 90 days. 
It has no label and needs triage.
   
   For general "how-to" questions, our [user forum](https://discuss.mxnet.io/) 
(and [Chinese version](https://discuss.gluon.ai/)) is a good place to get help.




[GitHub] thinksanky commented on issue #9548: updated release version to 1.1.0 on the mainpage and re-arranged news?

2018-02-06 Thread GitBox
thinksanky commented on issue #9548: updated release version to 1.1.0 on the 
mainpage and re-arranged news?
URL: https://github.com/apache/incubator-mxnet/pull/9548#issuecomment-363613011
 
 
   ![screen shot 2018-02-06 at 4 17 40 
pm](https://user-images.githubusercontent.com/31976455/35891661-538142e4-0b5a-11e8-864f-d2ed747156f7.png)
   




[GitHub] reminisce commented on issue #9700: Squeeze op

2018-02-06 Thread GitBox
reminisce commented on issue #9700: Squeeze op
URL: https://github.com/apache/incubator-mxnet/pull/9700#issuecomment-363610107
 
 
   @szha Oh, yes. I forgot to do so. I will submit a PR to add the fluent entry 
for this op.
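   For readers, a hedged illustration of what a fluent entry means here (the 
method form only works once the fluent entry is registered in nd and sym):
   
   ```
   import mxnet as mx

   x = mx.nd.ones((1, 3, 1))
   a = mx.nd.squeeze(x)     # namespace form, registered by the operator itself
   b = x.squeeze()          # fluent form, enabled by the fluent entry
   print(a.shape, b.shape)  # expected: (3,) (3,)
   ```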




[GitHub] szha commented on issue #9700: Squeeze op

2018-02-06 Thread GitBox
szha commented on issue #9700: Squeeze op
URL: https://github.com/apache/incubator-mxnet/pull/9700#issuecomment-363609719
 
 
   @reminisce could you add a fluent entry for this in both nd and sym?




[incubator-mxnet] branch master updated: Squeeze op (#9700)

2018-02-06 Thread jxie
This is an automated email from the ASF dual-hosted git repository.

jxie pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new c19f506  Squeeze op (#9700)
c19f506 is described below

commit c19f506a021feb1bd087423b59c1db11d3c4cd08
Author: reminisce 
AuthorDate: Tue Feb 6 16:07:08 2018 -0800

Squeeze op (#9700)

* Add squeeze op

* Add unit test

* Fix lint

* User IdentityCompute directly
---
 src/operator/tensor/matrix_op-inl.h| 67 ++
 src/operator/tensor/matrix_op.cc   | 39 
 src/operator/tensor/matrix_op.cu   |  7 
 tests/python/unittest/test_operator.py | 34 +
 4 files changed, 147 insertions(+)

diff --git a/src/operator/tensor/matrix_op-inl.h 
b/src/operator/tensor/matrix_op-inl.h
index c1ecc06..38ddf2c 100644
--- a/src/operator/tensor/matrix_op-inl.h
+++ b/src/operator/tensor/matrix_op-inl.h
@@ -1834,6 +1834,73 @@ void StackOpBackward(const nnvm::NodeAttrs& attrs,
   })
 }
 
+struct SqueezeParam : public dmlc::Parameter<SqueezeParam> {
+  dmlc::optional<TShape> axis;
+  DMLC_DECLARE_PARAMETER(SqueezeParam) {
+    DMLC_DECLARE_FIELD(axis)
+    .set_default(dmlc::optional<TShape>())
+    .describe("Selects a subset of the single-dimensional entries in the shape."
+              " If an axis is selected with shape entry greater than one, an error is raised.");
+  }
+};
+
+// Given a shape that may have dim size equal to 0,
+// move all the zeros to the last of the shape array
+// and keep the relative order of the non-zero values.
+// Returns the new shape size after moving all zeros to the end.
+inline uint32_t SqueezeShapeHelper(TShape* shape) {
+  CHECK(shape != nullptr);
+  uint32_t count = 0;
+  for (uint32_t i = 0; i < shape->ndim(); ++i) {
+    if ((*shape)[i] == 0) {
+      ++count;
+    } else {
+      std::swap((*shape)[i], (*shape)[i-count]);
+    }
+  }
+  return shape->ndim() - count;
+}
+
+inline bool SqueezeShape(const nnvm::NodeAttrs& attrs,
+                         std::vector<TShape> *in_attrs,
+                         std::vector<TShape> *out_attrs) {
+  const SqueezeParam& param = nnvm::get<SqueezeParam>(attrs.parsed);
+  CHECK_EQ(in_attrs->size(), 1U) << "Input: [data]";
+  CHECK_EQ(out_attrs->size(), 1U);
+  const TShape& dshape = in_attrs->at(0);
+  const int dndim = dshape.ndim();
+  if (shape_is_none(dshape)) return false;
+  TShape oshape = dshape;
+  if (param.axis.has_value()) {
+    // preprocess axis
+    TShape axes = param.axis.value();
+    for (uint32_t i = 0; i < axes.ndim(); ++i) {
+      if (axes[i] < 0) {
+        axes[i] += dndim;
+        CHECK_GE(axes[i], 0)
+          << "axis " << axes[i] - dndim << " is out of bounds for array of dimension " << dndim;
+      }
+      CHECK_LT(axes[i], dndim)
+        << "axis " << axes[i] << " is out of bounds for array of dimension " << dndim;
+      CHECK_EQ(dshape[axes[i]], 1)
+        << "cannot select an axis to squeeze out which has size="
+        << dshape[axes[i]] << " not equal to one";
+      CHECK_NE(oshape[axes[i]], 0) << "duplicate value in axis";
+      oshape[axes[i]] = 0;
+    }
+  } else {
+    for (uint32_t i = 0; i < oshape.ndim(); ++i) {
+      if (oshape[i] == 1) oshape[i] = 0;
+    }
+  }
+  uint32_t oshape_size = SqueezeShapeHelper(&oshape);
+  if (oshape_size == 0) {  // corner case when dshape is (1, 1, 1, 1)
+    oshape[0] = 1;
+    oshape_size = 1;
+  }
+  SHAPE_ASSIGN_CHECK(*out_attrs, 0, TShape(oshape.data(), oshape.data()+oshape_size));
+  return true;
+}
 
 }  // namespace op
 }  // namespace mxnet
diff --git a/src/operator/tensor/matrix_op.cc b/src/operator/tensor/matrix_op.cc
index 9167fcf..7439fb4 100644
--- a/src/operator/tensor/matrix_op.cc
+++ b/src/operator/tensor/matrix_op.cc
@@ -97,6 +97,7 @@ DMLC_REGISTER_PARAMETER(RepeatParam);
 DMLC_REGISTER_PARAMETER(TileParam);
 DMLC_REGISTER_PARAMETER(ReverseParam);
 DMLC_REGISTER_PARAMETER(StackParam);
+DMLC_REGISTER_PARAMETER(SqueezeParam);
 
 NNVM_REGISTER_OP(Reshape)
 .add_alias("reshape")
@@ -739,5 +740,43 @@ NNVM_REGISTER_OP(_backward_stack)
 .set_attr<nnvm::TIsBackward>("TIsBackward", true)
 .set_attr<FCompute>("FCompute", StackOpBackward<cpu>);
 
+NNVM_REGISTER_OP(squeeze)
+.describe(R"code(Remove single-dimensional entries from the shape of an array.
+Same behavior of defining the output tensor shape as numpy.squeeze for the 
most of cases.
+See the following note for exception.
+
+Examples::
+
+  data = [[[0], [1], [2]]]
+  squeeze(data) = [0, 1, 2]
+  squeeze(data, axis=0) = [[0], [1], [2]]
+  squeeze(data, axis=2) = [[0, 1, 2]]
+  squeeze(data, axis=(0, 2)) = [0, 1, 2]
+
+.. Note::
+  The output of this operator will keep at least one dimension not removed. 
For example,
+  squeeze([[[4]]]) = [4], while in numpy.squeeze, the output will become a 
scalar.
+)code")
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr_parser(ParamParser<SqueezeParam>)
+.set_attr<nnvm::FListInputNames>("FListInputNames",
+  [](cons

[GitHub] piiswrong closed pull request #9700: Squeeze op

2018-02-06 Thread GitBox
piiswrong closed pull request #9700: Squeeze op
URL: https://github.com/apache/incubator-mxnet/pull/9700
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/src/operator/tensor/matrix_op-inl.h 
b/src/operator/tensor/matrix_op-inl.h
index c1ecc06d4e..38ddf2c2d3 100644
--- a/src/operator/tensor/matrix_op-inl.h
+++ b/src/operator/tensor/matrix_op-inl.h
@@ -1834,6 +1834,73 @@ void StackOpBackward(const nnvm::NodeAttrs& attrs,
   })
 }
 
+struct SqueezeParam : public dmlc::Parameter<SqueezeParam> {
+  dmlc::optional<TShape> axis;
+  DMLC_DECLARE_PARAMETER(SqueezeParam) {
+    DMLC_DECLARE_FIELD(axis)
+    .set_default(dmlc::optional<TShape>())
+    .describe("Selects a subset of the single-dimensional entries in the shape."
+              " If an axis is selected with shape entry greater than one, an error is raised.");
+  }
+};
+
+// Given a shape that may have dim size equal to 0,
+// move all the zeros to the last of the shape array
+// and keep the relative order of the non-zero values.
+// Returns the new shape size after moving all zeros to the end.
+inline uint32_t SqueezeShapeHelper(TShape* shape) {
+  CHECK(shape != nullptr);
+  uint32_t count = 0;
+  for (uint32_t i = 0; i < shape->ndim(); ++i) {
+    if ((*shape)[i] == 0) {
+      ++count;
+    } else {
+      std::swap((*shape)[i], (*shape)[i-count]);
+    }
+  }
+  return shape->ndim() - count;
+}
+
+inline bool SqueezeShape(const nnvm::NodeAttrs& attrs,
+                         std::vector<TShape> *in_attrs,
+                         std::vector<TShape> *out_attrs) {
+  const SqueezeParam& param = nnvm::get<SqueezeParam>(attrs.parsed);
+  CHECK_EQ(in_attrs->size(), 1U) << "Input: [data]";
+  CHECK_EQ(out_attrs->size(), 1U);
+  const TShape& dshape = in_attrs->at(0);
+  const int dndim = dshape.ndim();
+  if (shape_is_none(dshape)) return false;
+  TShape oshape = dshape;
+  if (param.axis.has_value()) {
+    // preprocess axis
+    TShape axes = param.axis.value();
+    for (uint32_t i = 0; i < axes.ndim(); ++i) {
+      if (axes[i] < 0) {
+        axes[i] += dndim;
+        CHECK_GE(axes[i], 0)
+          << "axis " << axes[i] - dndim << " is out of bounds for array of dimension " << dndim;
+      }
+      CHECK_LT(axes[i], dndim)
+        << "axis " << axes[i] << " is out of bounds for array of dimension " << dndim;
+      CHECK_EQ(dshape[axes[i]], 1)
+        << "cannot select an axis to squeeze out which has size="
+        << dshape[axes[i]] << " not equal to one";
+      CHECK_NE(oshape[axes[i]], 0) << "duplicate value in axis";
+      oshape[axes[i]] = 0;
+    }
+  } else {
+    for (uint32_t i = 0; i < oshape.ndim(); ++i) {
+      if (oshape[i] == 1) oshape[i] = 0;
+    }
+  }
+  uint32_t oshape_size = SqueezeShapeHelper(&oshape);
+  if (oshape_size == 0) {  // corner case when dshape is (1, 1, 1, 1)
+    oshape[0] = 1;
+    oshape_size = 1;
+  }
+  SHAPE_ASSIGN_CHECK(*out_attrs, 0, TShape(oshape.data(), oshape.data()+oshape_size));
+  return true;
+}
 
 }  // namespace op
 }  // namespace mxnet
diff --git a/src/operator/tensor/matrix_op.cc b/src/operator/tensor/matrix_op.cc
index 9167fcfe7e..7439fb49d8 100644
--- a/src/operator/tensor/matrix_op.cc
+++ b/src/operator/tensor/matrix_op.cc
@@ -97,6 +97,7 @@ DMLC_REGISTER_PARAMETER(RepeatParam);
 DMLC_REGISTER_PARAMETER(TileParam);
 DMLC_REGISTER_PARAMETER(ReverseParam);
 DMLC_REGISTER_PARAMETER(StackParam);
+DMLC_REGISTER_PARAMETER(SqueezeParam);
 
 NNVM_REGISTER_OP(Reshape)
 .add_alias("reshape")
@@ -739,5 +740,43 @@ NNVM_REGISTER_OP(_backward_stack)
 .set_attr<nnvm::TIsBackward>("TIsBackward", true)
 .set_attr<FCompute>("FCompute", StackOpBackward<cpu>);
 
+NNVM_REGISTER_OP(squeeze)
+.describe(R"code(Remove single-dimensional entries from the shape of an array.
+Same behavior of defining the output tensor shape as numpy.squeeze for the 
most of cases.
+See the following note for exception.
+
+Examples::
+
+  data = [[[0], [1], [2]]]
+  squeeze(data) = [0, 1, 2]
+  squeeze(data, axis=0) = [[0], [1], [2]]
+  squeeze(data, axis=2) = [[0, 1, 2]]
+  squeeze(data, axis=(0, 2)) = [0, 1, 2]
+
+.. Note::
+  The output of this operator will keep at least one dimension not removed. 
For example,
+  squeeze([[[4]]]) = [4], while in numpy.squeeze, the output will become a 
scalar.
+)code")
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr_parser(ParamParser<SqueezeParam>)
+.set_attr<nnvm::FListInputNames>("FListInputNames",
+  [](const NodeAttrs& attrs) {
+    return std::vector<std::string>{"data"};
+  })
+.set_attr<nnvm::FInferShape>("FInferShape", SqueezeShape)
+.set_attr<nnvm::FInferType>("FInferType", ElemwiseType<1, 1>)
+.set_attr<FCompute>("FCompute", UnaryOp::IdentityCompute<cpu>)
+.set_attr<nnvm::FGradient>("FGradient", ElemwiseGradUseNone{"_backward_squeeze"})
+.add_argument("data", "NDArray-or-Symbol[]", "data to squeeze")
+.add_arguments(StackParam::__FIELDS__());
+
+NNVM_REGISTER_OP(_backward_squeeze)
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr_pars

[GitHub] marcoabreu commented on issue #9723: Specify lint-versions and fix docker build issues due to nvidia-384

2018-02-06 Thread GitBox
marcoabreu commented on issue #9723: Specify lint-versions and fix docker build 
issues due to nvidia-384
URL: https://github.com/apache/incubator-mxnet/pull/9723#issuecomment-363607686
 
 
   @cjolivier01 @reminisce @zheng-da As soon as this PR is merged, you will 
again be able to make changes to the dockerfiles without failing with the 
nvidia-384 error.
   
   @szha @eric-haibin-lin 




[GitHub] marcoabreu opened a new pull request #9723: Specify lint-versions and fix docker build issues due to nvidia-384

2018-02-06 Thread GitBox
marcoabreu opened a new pull request #9723: Specify lint-versions and fix 
docker build issues due to nvidia-384
URL: https://github.com/apache/incubator-mxnet/pull/9723
 
 
   ## Description ##
   Pin the pylint and cpplint versions in order to provide consistent 
behaviour. Otherwise, new slaves (having no Docker cache present) might use a 
newer linting version and produce different results. Linting reports for the 
latest version have been fixed in #9660.
   
   Due to Nvidia's update regarding the Spectre vulnerability, we were not able 
to use the nvidia-384 package inside our Dockerfiles. Instead, we're now 
switching to the cuda-8-0 package, which matches our general CI setup to 
support CUDA 8.0.
   ## Checklist ##
   ### Essentials ###
   - [x] Passed code style checking (`make lint`)
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [x] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - [x] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - Set CppLint to 1.3.0
   - Set PyLint to 1.8.2
   - Get CUDA stubs in build_cuda from cuda-8-0 instead of nvidia-384.
   
   ## Comments ##
   
   




[GitHub] eric-haibin-lin opened a new pull request #9722: Improve sparse.retain performance on CPU

2018-02-06 Thread GitBox
eric-haibin-lin opened a new pull request #9722: Improve sparse.retain 
performance on CPU 
URL: https://github.com/apache/incubator-mxnet/pull/9722
 
 
   ## Description ##
   This PR improves the performance of the `sparse.retain` operator on CPU. The 
previous implementation used a single thread to copy the retained data, which 
usually doesn't saturate the memory bandwidth. It now uses a kernel launch to 
utilize multiple threads.
   @reminisce 
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a 
reference to the original paper if applicable
   - [ ] To my best knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   
   ## Comments ##
   
   Benchmark script: 
   ```
   import mxnet as mx
   import numpy as np
   import time
   
   mx.random.seed(1)
   np.random.seed(1)
   num_rows = 1024*770
   idx = np.random.randint(low=0, high=num_rows-1, size=9*1024)
   sorted_idx = np.unique(idx)
   print(sorted_idx.shape)
   
   idx_nd = mx.nd.array(idx, dtype=np.int64)
   data = mx.nd.ones((num_rows, 1024)).tostype('row_sparse')
   
   mx.nd.waitall()
   a = time.time()
   for i in range(1):
       out = mx.nd.sparse.retain(data=data, indices=idx_nd)
   mx.nd.waitall()
   b = time.time()
   print('warm up time', b - a)
   mx.nd.waitall()
   c = time.time()
   for i in range(1000):
       out = mx.nd.sparse.retain(data=data, indices=idx_nd)
   mx.nd.waitall()
   d = time.time()
   print('elapsed time', d - c)
   ```
   Benchmark result on p2.8xlarge:
   
   | Experiment| Elapsed Time (s)   | Speedup  |
   | - |:-:| -:|
   | Baseline(single thread copy) | 10.0100450516 | 1 |
   | Parallel w/ OMP_NUM_THREADS=2 | 7.74782586098 | 1.29x |
   | Parallel w/ OMP_NUM_THREADS=4 | 5.00379800797 |  2.00x |
   | Parallel w/ OMP_NUM_THREADS=6 | 4.04206800461 | 2.47x |
   | Parallel w/ OMP_NUM_THREADS=10 | 4.05241584778 | 2.47x |
   




[GitHub] marcoabreu commented on issue #9688: bilinear upsample from PyTorch

2018-02-06 Thread GitBox
marcoabreu commented on issue #9688: bilinear upsample from PyTorch
URL: https://github.com/apache/incubator-mxnet/pull/9688#issuecomment-363605517
 
 
   Quick note: You can also run "make lint" to test the linting locally.




[GitHub] marcoabreu commented on issue #9616: Removing a broken tutorial from the nightly tests

2018-02-06 Thread GitBox
marcoabreu commented on issue #9616: Removing a broken tutorial from the 
nightly tests
URL: https://github.com/apache/incubator-mxnet/pull/9616#issuecomment-363604369
 
 
   Well, but this is still a valid issue, isn't it? I don't see any reason why 
this should be closed - otherwise, we'll have the same problems at a later 
point in time.
   
   Would you mind reopening?




[GitHub] aaronmarkham commented on issue #9616: Removing a broken tutorial from the nightly tests

2018-02-06 Thread GitBox
aaronmarkham commented on issue #9616: Removing a broken tutorial from the 
nightly tests
URL: https://github.com/apache/incubator-mxnet/pull/9616#issuecomment-363603705
 
 
   I'm going to close this as it's only messing with the nightlies, it isn't 
critical, and the CI/test infrastructure is being overhauled anyway.




[GitHub] aaronmarkham closed pull request #9616: Removing a broken tutorial from the nightly tests

2018-02-06 Thread GitBox
aaronmarkham closed pull request #9616: Removing a broken tutorial from the 
nightly tests
URL: https://github.com/apache/incubator-mxnet/pull/9616
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/tests/nightly/test_tutorial_config.txt 
b/tests/nightly/test_tutorial_config.txt
index 428309b84c..8ea7e7a8fd 100644
--- a/tests/nightly/test_tutorial_config.txt
+++ b/tests/nightly/test_tutorial_config.txt
@@ -4,4 +4,3 @@ basic/module
 basic/data
 python/linear-regression
 python/mnist
-python/predict_image


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhanghang1989 opened a new issue #9721: Feature Request gradcheck for Gluon and NDArray

2018-02-06 Thread GitBox
zhanghang1989 opened a new issue #9721: Feature Request gradcheck for Gluon and 
NDArray
URL: https://github.com/apache/incubator-mxnet/issues/9721
 
 
   A gradcheck similar to PyTorch's would be desirable: 
https://github.com/pytorch/pytorch/blob/master/torch/autograd/gradcheck.py#L215
   
   The current `check_numeric_gradient()` seems to only support the symbolic API:
   
https://github.com/apache/incubator-mxnet/blob/master/python/mxnet/test_utils.py#L794
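   
   For illustration, a minimal sketch of what such a check could look like on top of `mxnet.autograd` (the helper name `gradcheck`, the tolerances, and the finite-difference step below are hypothetical, not an existing MXNet API):
   
   ```
   import numpy as np
   from mxnet import autograd, nd
   
   def gradcheck(f, x, eps=1e-4, rtol=1e-2, atol=1e-4):
       """Compare autograd gradients of a scalar-valued f(x) with central differences."""
       x = x.copy()
       x.attach_grad()
       with autograd.record():
           y = f(x)
       y.backward()
       analytic = x.grad.asnumpy()
   
       flat = x.asnumpy().reshape(-1)
       numeric = np.zeros(flat.size, dtype=flat.dtype)
       for i in range(flat.size):
           orig = flat[i]
           flat[i] = orig + eps
           fp = f(nd.array(flat.reshape(x.shape))).asscalar()
           flat[i] = orig - eps
           fm = f(nd.array(flat.reshape(x.shape))).asscalar()
           flat[i] = orig
           numeric[i] = (fp - fm) / (2 * eps)
   
       return np.allclose(numeric.reshape(x.shape), analytic, rtol=rtol, atol=atol)
   
   # e.g. check the gradient of a simple elementwise function
   print(gradcheck(lambda z: (z * z).sum(), nd.random.uniform(shape=(3, 4))))
   ```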
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhanghang1989 commented on issue #9648: BatchNorm Evaluation Mode Backward Fails with cudnn Enabled

2018-02-06 Thread GitBox
zhanghang1989 commented on issue #9648: BatchNorm Evaluation Mode Backward 
Fails with cudnn Enabled
URL: 
https://github.com/apache/incubator-mxnet/issues/9648#issuecomment-363600182
 
 
   @eric-haibin-lin I think this should be labeled as `bug`


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch v1.1.0 updated: Update NEWS.md

2018-02-06 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch v1.1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/v1.1.0 by this push:
 new 31104c9  Update NEWS.md
31104c9 is described below

commit 31104c9d4b050883467f45f8bf9a164acb93976f
Author: Haibin Lin 
AuthorDate: Tue Feb 6 15:22:53 2018 -0800

Update NEWS.md
---
 NEWS.md | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/NEWS.md b/NEWS.md
index 920063a..a51b514 100644
--- a/NEWS.md
+++ b/NEWS.md
@@ -12,6 +12,7 @@ MXNet Change Log
 - Fixed custom op multi-GPU scaling (#9283)
 - Fixed gradient of gather_nd when duplicate entries exist in index. (#9200)
 - Fixed overriden contexts in Module `group2ctx` option when using multiple 
contexts (#8867)
+- Fixed `swap_axes` operator with "add_to" gradient req (#9541)
 ### New Features
 - Added experimental API in `contrib.text` for building vocabulary, and 
loading pre-trained word embeddings, with built-in support for 307 GloVe and 
FastText pre-trained embeddings. (#8763)
 - Added experimental structural blocks in `gluon.contrib`: `Concurrent`, 
`HybridConcurrent`, `Identity`. (#9427)
@@ -26,7 +27,7 @@ MXNet Change Log
 - Added `lazy_update` option for standard `SGD` & `Adam` optimizer with 
`row_sparse` gradients (#9468, #9189)
 - Added `select` option in `Block.collect_params` to support regex (#9348)
 - Added support for (one-to-one and sequence-to-one) inference on explicit 
unrolled RNN models in R (#9022) 
-### Depreciations
+### Deprecations
 - The Scala API name space is still called `ml.dmlc`. The name space is likely 
be changed in a future release to `org.apache` and might brake existing 
applications and scripts (#9579, #9324)
 ### Performance Improvements
 - Improved GPU inference speed by 20% when batch size is 1 (#9055)
@@ -35,6 +36,7 @@ MXNet Change Log
 - Improved batching for GEMM/TRSM operators with large matrices on GPU (#8846)
 ### Known Issues
 - "Predict with pre-trained models" tutorial is broken
+- "example/numpy-ops/ndarray_softmax.py" is broken
 
 For more information and examples, see [full release 
notes](https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.1.0+Release+Notes)
 

-- 
To stop receiving notification emails like this one, please contact
hai...@apache.org.


[GitHub] sxjscience commented on issue #9716: Reduce ndarray size in test which produces a huge memory spike which ?

2018-02-06 Thread GitBox
sxjscience commented on issue #9716: Reduce ndarray size in test which produces 
a huge memory spike which ?
URL: https://github.com/apache/incubator-mxnet/pull/9716#issuecomment-363574081
 
 
   For reference https://github.com/apache/incubator-mxnet/pull/8398/files


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #9716: Reduce ndarray size in test which produces a huge memory spike which ?

2018-02-06 Thread GitBox
marcoabreu commented on issue #9716: Reduce ndarray size in test which produces 
a huge memory spike which ?
URL: https://github.com/apache/incubator-mxnet/pull/9716#issuecomment-363571021
 
 
   Do we have the issue documented somewhere?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on a change in pull request #9717: Doc improvement

2018-02-06 Thread GitBox
marcoabreu commented on a change in pull request #9717: Doc improvement
URL: https://github.com/apache/incubator-mxnet/pull/9717#discussion_r166447642
 
 

 ##
 File path: tests/python/unittest/test_ndarray.py
 ##
 @@ -590,9 +590,8 @@ def gt_topk(dat, axis, ret_typ, k, is_ascend):
 gt = gt_topk(a_npy, axis=None, ret_typ="indices", k=5*5*5*5, 
is_ascend=False)
 assert_almost_equal(nd_ret_argsort, gt)
 
-# test topk with a big shape
-a = mx.nd.arange(0, 54686454, step=1, repeat=1)
-assert_almost_equal(a.topk(k=54686454).asnumpy(), a.asnumpy()[::-1])
+a = mx.nd.arange(0, 1024, step=1, repeat=1)
+assert_almost_equal(a.topk(k=1024).asnumpy(), a.asnumpy()[::-1])
 
 Review comment:
   This does not look like a doc improvement. Added accidentally? 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] marcoabreu commented on issue #9705: Added unittest for benchmarking metric performance

2018-02-06 Thread GitBox
marcoabreu commented on issue #9705: Added unittest for benchmarking metric 
performance
URL: https://github.com/apache/incubator-mxnet/pull/9705#issuecomment-363570184
 
 
   Please make sure to use a fixed seed in order to provide reproducibility 
between different runs.
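   
   For example, a minimal sketch of fixing the seeds at the top of the benchmark (the seed value is arbitrary):
   
   ```
   import numpy as np
   import mxnet as mx
   
   # Fix all random sources so repeated runs generate identical inputs.
   np.random.seed(42)
   mx.random.seed(42)
   ```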


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] sxjscience commented on issue #9716: Reduce ndarray size in test which produces a huge memory spike which ?

2018-02-06 Thread GitBox
sxjscience commented on issue #9716: Reduce ndarray size in test which produces 
a huge memory spike which ?
URL: https://github.com/apache/incubator-mxnet/pull/9716#issuecomment-363569949
 
 
   I think it was originally added to test the correctness of sorting a very 
large ndarray.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] drkoller opened a new issue #9720: Caffe converter shortcomings with Crop, Eltwise, Slice layers

2018-02-06 Thread GitBox
drkoller opened a new issue #9720: Caffe converter shortcomings with Crop, 
Eltwise, Slice layers
URL: https://github.com/apache/incubator-mxnet/issues/9720
 
 
   I am converting network models from Caffe to MXNet using the 
`convert_symbol.py` conversion tool. The tool fails to properly convert several 
types of layers:
   
   1) Caffe Crop layer: the converter ignores the "offset" parameters, and sets 
"center_crop=TRUE". Caffe crop does not do crop-to-center, even when no offset 
parameters are specified. Rather, Caffe defaults to offset values of zero.
   
   2) Caffe Eltwise layer: the converter ignores the "operation" parameter and 
converts any Eltwise layer to `mx.symbol.broadcast_add`, even though the Caffe 
operation may be `PROD` or `MAX` (see the sketch after this list).
   
   3) Caffe Slice layer: unsupported by the converter.
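   
   For the Eltwise case, a minimal sketch of a mapping the converter could use (the dict and function names below are hypothetical, not existing `convert_symbol.py` code):
   
   ```
   import mxnet as mx
   
   # Map the Caffe Eltwise "operation" value to the corresponding MXNet symbol op
   # instead of unconditionally emitting broadcast_add.
   ELTWISE_OP_TO_MXNET = {
       'SUM': mx.symbol.broadcast_add,
       'PROD': mx.symbol.broadcast_mul,
       'MAX': mx.symbol.broadcast_maximum,
   }
   
   def convert_eltwise(operation, lhs, rhs):
       # `operation` is the Caffe Eltwise operation name, e.g. 'PROD'
       return ELTWISE_OP_TO_MXNET[operation](lhs, rhs)
   ```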
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhanghang1989 commented on a change in pull request #9688: bilinear upsample from PyTorch

2018-02-06 Thread GitBox
zhanghang1989 commented on a change in pull request #9688: bilinear upsample 
from PyTorch
URL: https://github.com/apache/incubator-mxnet/pull/9688#discussion_r166434209
 
 

 ##
 File path: src/operator/bilinear_upsample.cc
 ##
 @@ -0,0 +1,185 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+/*!
+ * Copyright (c) 2018 by Contributors
+ * \file bilinear_upsample.cc
+ * \brief bilinear upsample operator
+ * \author Hang Zhang
+ * Adapted from PyTorch
+*/
+#include "devicetensor.h"
+#include "bilinear_upsample-inl.h"
+#include "elemwise_op_common.h"
+
+namespace mxnet {
+namespace op {
+
+
+template
+void SpatialUpSamplingBilinearUpdateOutput(mshadow::Stream *s,
+   const std::vector &input,
+   const std::vector &output) {
+  DeviceTensor itensor = devicetensor(input[0]);
+  DeviceTensor otensor = devicetensor(output[0]);
+  int nbatch = otensor.getSize(0);
+  int channels = otensor.getSize(1);
+  int outputHeight = otensor.getSize(2);
+  int outputWidth = otensor.getSize(3);
+  int inputHeight = itensor.getSize(2);
+  int inputWidth = itensor.getSize(3);
+
+  DType *idata = itensor.data_ptr();
+  DType *odata = otensor.data_ptr();
+  channels = nbatch * channels;
+  // special case: just copy
+  if (inputHeight == outputHeight && inputWidth == outputWidth) {
+for (int h2 = 0; h2 < outputHeight; ++h2) {
+  const int h1 = h2;
+  for (int w2 = 0; w2 < outputWidth; ++w2) {
+const int w1 = w2;
+const DType* pos1 = &idata[h1 * inputWidth + w1];
+DType* pos2 = &odata[h2 * outputWidth + w2];
+for (int c = 0; c < channels; ++c) {
+  pos2[0] = pos1[0];
+  pos1 += inputWidth * inputHeight;
+  pos2 += outputWidth * outputHeight;
+}
+  }
+}
+return;
+  }
+  const float rheight =(outputHeight > 1) ? (float)(inputHeight - 
1)/(outputHeight - 1) : 0.f;
+  const float rwidth = (outputWidth > 1) ? (float)(inputWidth - 1) / 
(outputWidth - 1) : 0.f;
+  for (int h2 = 0; h2 < outputHeight; ++h2) {
+const float h1r = rheight * h2;
+const int h1 = h1r;
+const int h1p = (h1 < inputHeight - 1) ? 1 : 0;
+const DType h1lambda = h1r - h1;
+const DType h0lambda = (DType)1. - h1lambda;
+for (int w2 = 0; w2 < outputWidth; ++w2) {
+  const float w1r = rwidth * w2;
+  const int w1 = w1r;
+  const int w1p = (w1 < inputWidth - 1) ? 1 : 0;
+  const DType w1lambda = w1r - w1;
+  const DType w0lambda = (DType)1. - w1lambda;
+  const DType* pos1 = &idata[h1 * inputWidth + w1];
+  DType* pos2 = &odata[h2 * outputWidth + w2];
+  for (int c = 0; c < channels; ++c) {
+pos2[0] = h0lambda * (w0lambda * pos1[0]+ w1lambda * pos1[w1p])
+  + h1lambda * (w0lambda * pos1[h1p * inputWidth]
+  + w1lambda * pos1[h1p * inputWidth + w1p]);
+pos1 += inputWidth * inputHeight;
+pos2 += outputWidth * outputHeight;
+  }
+}
+  }
+}
+
+
+template
+void SpatialUpSamplingBilinearUpdateGradInput(mshadow::Stream *s,
+  const std::vector &input,
+  const std::vector 
&output) {
+  DeviceTensor gradOutput = devicetensor(input[0]);
+  DeviceTensor gradInput = devicetensor(output[0]);
+  int nbatch = gradInput.getSize(0);
+  int channels = gradInput.getSize(1);
+  int outputHeight = gradInput.getSize(2);
+  int outputWidth = gradInput.getSize(3);
+  int inputHeight = gradOutput.getSize(2);
+  int inputWidth = gradOutput.getSize(3);
+
+  DType *data1 = gradInput.data_ptr();
+  DType *data2 = gradOutput.data_ptr();
+  channels = nbatch * channels;
+
+  // special case: same-size matching grids
+  if (inputHeight == outputHeight && inputWidth == outputWidth) {
+for (int h2 = 0; h2 < outputHeight; ++h2) {
+  const int h1 = h2;
+  for (int w2 = 0; w2 < outputWidth; ++w2) {
+const int w1 = w2;
+DType* pos1 = &data1[h1 * inputWidth + w1];
+const DType* pos2 = &data2[h2 * outputWidth + w2];
+for (int c = 0; c < channels; ++c) {
+  pos1[0] += pos2[0];
+  pos1 += inputWidth * 

[GitHub] zhanghang1989 commented on a change in pull request #9688: bilinear upsample from PyTorch

2018-02-06 Thread GitBox
zhanghang1989 commented on a change in pull request #9688: bilinear upsample 
from PyTorch
URL: https://github.com/apache/incubator-mxnet/pull/9688#discussion_r166433896
 
 

 ##
 File path: src/operator/bilinear_upsample-inl.h
 ##
 @@ -0,0 +1,162 @@
+/*
+ * Licensed to the Apache Software Foundation (ASF) under one
+ * or more contributor license agreements.  See the NOTICE file
+ * distributed with this work for additional information
+ * regarding copyright ownership.  The ASF licenses this file
+ * to you under the Apache License, Version 2.0 (the
+ * "License"); you may not use this file except in compliance
+ * with the License.  You may obtain a copy of the License at
+ *
+ *   http://www.apache.org/licenses/LICENSE-2.0
+ *
+ * Unless required by applicable law or agreed to in writing,
+ * software distributed under the License is distributed on an
+ * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+ * KIND, either express or implied.  See the License for the
+ * specific language governing permissions and limitations
+ * under the License.
+ */
+ /*!
+ * Copyright (c) 2018 by Contributors
+ * \file bilinear_upsample-inl.h
+ * \brief  bilinear upsample operator
+ * \author Hang Zhang
+ */
+#ifndef MXNET_OPERATOR_BILINEAR_SAMPLE_INL_H_
+#define MXNET_OPERATOR_BILINEAR_SAMPLE_INL_H_
+
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include 
+#include "../ndarray/ndarray_function.h"
+#include "./operator_common.h"
+#include "./mxnet_op.h"
+#include "./mshadow_op.h"
+
+namespace mxnet {
+namespace op {
+
+struct BilinearSampleParam : public dmlc::Parameter {
+  int out_height;
 
 Review comment:
   This operator mainly supports fractional scale ratio (input and output sizes 
can be arbitrary), so it is more convenient to use output size instead of 
scale. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tqchen commented on issue #2986: [call for contribution] Improving CPU performance

2018-02-06 Thread GitBox
tqchen commented on issue #2986: [call for contribution] Improving CPU 
performance
URL: 
https://github.com/apache/incubator-mxnet/issues/2986#issuecomment-363547473
 
 
   @Maratyszcza Thanks for pointing it out! I created #9719 for this.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] tqchen opened a new issue #9719: Improve NNPack Binding

2018-02-06 Thread GitBox
tqchen opened a new issue #9719: Improve NNPack Binding
URL: https://github.com/apache/incubator-mxnet/issues/9719
 
 
   Cross ref @Maratyszcza 's comment
   
   - Use new cmake system for latest version
   - Use pre-allocated workspace
   - Caching packing(might need changes in nnvm)
   - Possible fused kernel pattern
   
   The first two items seem to be low-hanging fruit and can be done easily.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin closed pull request #8527: send _send_command_to_servers must be string

2018-02-06 Thread GitBox
eric-haibin-lin closed pull request #8527: send _send_command_to_servers must 
be string
URL: https://github.com/apache/incubator-mxnet/pull/8527
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/python/mxnet/kvstore.py b/python/mxnet/kvstore.py
index b2a4beaf93..9eb212d979 100644
--- a/python/mxnet/kvstore.py
+++ b/python/mxnet/kvstore.py
@@ -457,6 +457,8 @@ def set_optimizer(self, optimizer):
 optim_str = py_str(pickle.dumps(optimizer, 0))
 except:
 raise
+if isinstance(optim_str, bytes):
+optim_str = py_str(optim_str)
 self._send_command_to_servers(0, optim_str)
 else:
 self._set_updater(opt.get_updater(optimizer))


 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on issue #8527: send _send_command_to_servers must be string

2018-02-06 Thread GitBox
eric-haibin-lin commented on issue #8527: send _send_command_to_servers must be 
string
URL: https://github.com/apache/incubator-mxnet/pull/8527#issuecomment-363544241
 
 
   Thanks for the contribution. But it looks like #8334 already fixed it. 
Closing it for now. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] Maratyszcza commented on issue #2986: [call for contribution] Improving CPU performance

2018-02-06 Thread GitBox
Maratyszcza commented on issue #2986: [call for contribution] Improving CPU 
performance
URL: 
https://github.com/apache/incubator-mxnet/issues/2986#issuecomment-363541857
 
 
   @xmchen1987 @hjk41 @tqchen I looked at NNPACK bindings in MXNet, and they 
have room for improvement:
   - NNPACK now includes CMake configuration scripts for all platforms. It is 
better to use those rather than stick to an old NNPACK version, as NNPACK is 
getting updates and performance improvements.
   - NNPACK supports using pre-allocated workspace buffers provided by the 
framework rather than allocating and de-allocating them inside NNPACK on each 
convolution call. This is a big cost, especially for small convolutions. See 
Maratyszcza/NNPACK#75 for details.
   - NNPACK supports pre-computing transformed coefficients for inference 
use-cases (when weights do not change between forward runs). See 
Maratyszcza/NNPACK#82 for details.
   - NNPACK can do fused Convolution+ReLU at the cost of a single convolution 
operation. See the `activation` parameter.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudh2290 commented on issue #9681: Better Exception Handling for Operators

2018-02-06 Thread GitBox
anirudh2290 commented on issue #9681: Better Exception Handling for Operators
URL: https://github.com/apache/incubator-mxnet/pull/9681#issuecomment-363535323
 
 
   ```
   try:
       x, y, z = op()
       x.asnumpy()  # throws exception, sets exception_ptr to nullptr
   except:
       handle_exc()
   y.asnumpy()  # exception_ptr is nullptr, doesn't throw
   y = op2(y)
   y.asnumpy()  # y may have garbage values, op2 may execute just fine, exception_ptr still nullptr, doesn't throw?
   ```
   
   @piiswrong As depicted in the example above, if we decide to invalidate 
exception_ptr for y by setting it to nullptr when we WaitToRead x (I am unsure 
how we will do this), then we won't be propagating exceptions down the chain. 
Therefore, the last line here will execute just fine instead of throwing an 
exception, and the user will end up with garbage values for y.
   
   I understand your point that if an op has multiple write vars, and if we 
waited for one of the write vars and re-threw the exception, we shouldn't throw 
it again for the other vars. But if we end up invalidating the exception_ptr, 
any continuing operators may or may not fail, and since the exception_ptr is 
invalidated we wouldn't be re-throwing the exception in any of the following 
WaitToReads.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] rahul003 commented on issue #7450: reporting bugs: pbegin_ <= pend_. Two thread conflicts.

2018-02-06 Thread GitBox
rahul003 commented on issue #7450:  reporting bugs: pbegin_ <= pend_. Two 
thread conflicts.
URL: 
https://github.com/apache/incubator-mxnet/issues/7450#issuecomment-363532983
 
 
   @ptrendx this issue seems to reoccur. We have complaints that users ran into 
this issue in v0.12.1. 
   Were you able to figure out the cause then?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] houkai opened a new issue #7450: reporting bugs: pbegin_ <= pend_. Two thread conflicts.

2018-02-06 Thread GitBox
houkai opened a new issue #7450:  reporting bugs: pbegin_ <= pend_. Two thread 
conflicts.
URL: https://github.com/apache/incubator-mxnet/issues/7450
 
 
   ## Environment info
   Operating System: CentOS release 6.9 (Final)
   
   Compiler: gcc version 4.8.2 (GCC)
   
   Package used (Python/R/Scala/Julia): Python
   
   MXNet version: installed from source: mxnet (0.10.1)
   
   Python version and distribution: Python 2.7.9
   
   ## Error Message:
   [17:16:29] include/dmlc/././logging.h:308: [17:16:29] src/recordio.cc:126: 
Check failed: pbegin_ <= pend_ Invalid RecordIO Format
   
   ## Minimum reproducible example
   During training, I configure ImageRecordIter in 
'mxnet/example/image-classification/common/data.py' as below:
   ```
   train = mx.io.ImageRecordIter(
   path_imgrec = args.data_train,
   label_width = 1,
   mean_r  = rgb_mean[0],
   mean_g  = rgb_mean[1],
   mean_b  = rgb_mean[2],
   scale   = args.scale,
   data_name   = 'data',
   label_name  = 'softmax_label',
   data_shape  = image_shape,
   batch_size  = args.batch_size,
   rand_crop   = args.random_crop,
   max_random_scale= args.max_random_scale,
   pad = args.pad_size,
   fill_value  = 127,
   min_random_scale= args.min_random_scale,
   max_aspect_ratio= args.max_random_aspect_ratio,
   random_h= args.max_random_h,
   random_s= args.max_random_s,
   random_l= args.max_random_l,
   max_rotate_angle= args.max_random_rotate_angle,
   max_shear_ratio = args.max_random_shear_ratio,
   rand_mirror = args.random_mirror,
   preprocess_threads  = args.data_nthreads,
   shuffle_chunk_size  = 64, 
   shuffle_chunk_seed  = 0,
   shuffle = True,
   num_parts   = nworker,
   part_index  = rank)
   ```
   I want to shuffle the data in every epoch (using random chunks).
   But when I run fine-tune.py, I get the error: pbegin_ <= pend_ Invalid 
RecordIO Format. Sometimes it succeeds, sometimes it fails.
   ```
   python -u fine-tune2.py --gpus '0,1,2,3' \
   --data-train=/home/liubang/mxnet/tools/txu_chen_wei_ban_lian/rec/ \
   --batch-size 128 --num-classes 44 --num-examples=528170 
--lr-step-epochs='2,4,6' \
   --num-epochs=15 --image-shape='3,224,224' --pretrained-model='resnext-101' \
   --load-epoch=0 --lr=3e-5 --lr-factor=0.5 --model-prefix=dress-resnext/tt \
   --data-nthreads=8
   ```
   The parameters in the command line are irrelevant.
   
   ## Analysis
   
   1. In iter_image_recordio_2.cc, the Init function creates source_:
   ```
   if (num_shuffle_parts > 1) {
     source_.reset(dmlc::InputSplitShuffle::Create(
         param_.path_imgrec.c_str(), param_.part_index,
         param_.num_parts, "recordio", num_shuffle_parts,
         param_.shuffle_chunk_seed));
   }
   ```
   This will run InputSplit::Create in io.cc (dmlc-core):
   ```
   #if DMLC_ENABLE_STD_THREAD
     if (spec.cache_file.length() == 0) {
       return new ThreadedInputSplit(split);
     } else {
       return new CachedInputSplit(split, spec.cache_file.c_str());
     }
   #else
     CHECK(spec.cache_file.length() == 0)
         << "to enable cached file, compile with c++11";
     return split;
   #endif
   ```
   If DMLC_ENABLE_STD_THREAD is 1, this returns a ThreadedInputSplit. Notice that 
this creates one thread that reads data from the recordio files. In threaded_input_split.h:
   ```
   explicit ThreadedInputSplit(InputSplitBase *base)
       : buffer_size_(InputSplitBase::kBufferSize),
         base_(base), tmp_chunk_(NULL) {
     iter_.set_max_capacity(2);
     // initialize the iterator
     iter_.Init([this](InputSplitBase::Chunk **dptr) {
           if (*dptr == NULL) {
             *dptr = new InputSplitBase::Chunk(buffer_size_);
           }
           return (*dptr)->Load(base_, buffer_size_);
         },
         [base]() { base->BeforeFirst(); });
   }
   ```
   source_ is the object which contains fs_ in InputSplitBase; fs_ is the file 
pointer.
   2. In iter_image_recordio_2.cc, the ImageRecordIter2 class creates another 
thread that uses source_ to get the data:
   ```
   virtual void Init(const std::vector<std::pair<std::string, std::string> >& kwargs) {
     prefetch_param_.InitAllowUnknown(kwargs);
     parser_.Init(kwargs);
     // maximum prefetch threaded iter internal size
     const int kMaxPrefetchBuffer = 16;
     // init thread iter
     iter_.set_max_capacity(kMaxPrefetchBuffer);
     // init thread iter
     iter_.Init([this](DataBatch **dptr) {
           if (*dptr == nullptr) {
             *dptr = new DataBatch();
           }
           return parser_.ParseNext(*dptr);
         },
         [this]() { parser_.

[GitHub] rahul003 commented on issue #7450: reporting bugs: pbegin_ <= pend_. Two thread conflicts.

2018-02-06 Thread GitBox
rahul003 commented on issue #7450:  reporting bugs: pbegin_ <= pend_. Two 
thread conflicts.
URL: 
https://github.com/apache/incubator-mxnet/issues/7450#issuecomment-363533150
 
 
   @szha Could you reopen the issue?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] szha commented on a change in pull request #9492: fix print_summary bug and add groups of convolution

2018-02-06 Thread GitBox
szha commented on a change in pull request #9492: fix print_summary bug and add 
groups of convolution
URL: https://github.com/apache/incubator-mxnet/pull/9492#discussion_r166411868
 
 

 ##
 File path: python/mxnet/visualization.py
 ##
 @@ -134,17 +134,23 @@ def print_layer_summary(node, out_shape):
 pre_filter = pre_filter + int(shape[0])
 cur_param = 0
 if op == 'Convolution':
-if ("no_bias" in node["attrs"]) and int(node["attrs"]["no_bias"]):
-cur_param = pre_filter * int(node["attrs"]["num_filter"])
+if ("no_bias" in node["attrs"]) and node["attrs"]["no_bias"] == 
'True':
+num_group = int(node["attrs"]["num_group"]) if \
+   ("num_group" in node["attrs"]) else 1
+cur_param = (pre_filter * int(node["attrs"]["num_filter"])) \
+   // num_group
 for k in _str2tuple(node["attrs"]["kernel"]):
 cur_param *= int(k)
 else:
-cur_param = pre_filter * int(node["attrs"]["num_filter"])
+num_group = int(node["attrs"]["num_group"]) if \
+   ("num_group" in node["attrs"]) else 1
+cur_param = (pre_filter * int(node["attrs"]["num_filter"])) \
+   // num_group
 for k in _str2tuple(node["attrs"]["kernel"]):
 cur_param *= int(k)
 cur_param += int(node["attrs"]["num_filter"])
 elif op == 'FullyConnected':
-if ("no_bias" in node["attrs"]) and int(node["attrs"]["no_bias"]):
+if ("no_bias" in node["attrs"]) and node["attrs"]["no_bias"] == 
'True':
 
 Review comment:
   you're right. @chinakook sorry for the misleading comment.
   @piiswrong BTW do you happen to know why we are not using json's built-in 
boolean type?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham commented on issue #9718: organized installation instructions to fix render issue

2018-02-06 Thread GitBox
aaronmarkham commented on issue #9718: organized installation instructions to 
fix render issue
URL: https://github.com/apache/incubator-mxnet/pull/9718#issuecomment-363531057
 
 
   @thinksanky - please review and confirm this update. Thanks!


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on issue #9713: a fatal error occurred in asynchronous engine operation

2018-02-06 Thread GitBox
piiswrong commented on issue #9713: a fatal error occurred in asynchronous 
engine operation
URL: 
https://github.com/apache/incubator-mxnet/issues/9713#issuecomment-363531006
 
 
   This looks like an out-of-memory error. Please try smaller batch sizes.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on issue #9710: If a relu layer can be shared by more than one convolution layers in mxnet

2018-02-06 Thread GitBox
piiswrong commented on issue #9710: If a relu layer can be shared by more than 
one convolution layers in mxnet
URL: 
https://github.com/apache/incubator-mxnet/issues/9710#issuecomment-363530652
 
 
   A relu layer doesn't have any parameters, so sharing it doesn't make sense. 
Please create a new one each time.
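   
   For example, a quick sketch in Gluon (the layer sizes here are arbitrary):
   
   ```
   from mxnet.gluon import nn
   
   # ReLU is stateless, so each branch simply gets its own Activation block.
   net1 = nn.HybridSequential()
   net1.add(nn.Conv2D(32, kernel_size=3), nn.Activation('relu'))
   
   net2 = nn.HybridSequential()
   net2.add(nn.Conv2D(64, kernel_size=3), nn.Activation('relu'))
   ```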


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on issue #9709: what will happen if one of the node reboot when doing the distribute training?

2018-02-06 Thread GitBox
piiswrong commented on issue #9709: what will happen if one of the node reboot 
when doing the distribute training?
URL: 
https://github.com/apache/incubator-mxnet/issues/9709#issuecomment-363530449
 
 
   All other nodes will stall.
   Restart is not supported yet.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on issue #9714: Does global pooling support padding?

2018-02-06 Thread GitBox
piiswrong commented on issue #9714: Does global pooling support padding?
URL: 
https://github.com/apache/incubator-mxnet/issues/9714#issuecomment-363529813
 
 
   Global pooling shouldn't support padding. If it's not checked, we need to add 
a check.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy commented on issue #9716: Reduce ndarray size in test which produces a huge memory spike which ?

2018-02-06 Thread GitBox
larroy commented on issue #9716: Reduce ndarray size in test which produces a 
huge memory spike which ?
URL: https://github.com/apache/incubator-mxnet/pull/9716#issuecomment-363528035
 
 
   What was the reason for the failure with such a big shape? Anyway, the overall 
situation with the tests on devices requires quite some attention; it is not a 
single-test problem. This was just low-hanging fruit.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] aaronmarkham opened a new pull request #9718: organized installation instructions to fix render issue

2018-02-06 Thread GitBox
aaronmarkham opened a new pull request #9718: organized installation 
instructions to fix render issue
URL: https://github.com/apache/incubator-mxnet/pull/9718
 
 
   ## Description ##
   For some reason the install page doesn't render properly, so I reorganized 
the divs to be a little cleaner. 
   It renders fine when I build the docs elsewhere: 
http://ec2-34-204-183-27.compute-1.amazonaws.com/
   
   ### Changes ###
   Nested the divs so random languages aren't orphaned...  that could be what 
was causing the rendering to fail.
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on a change in pull request #9492: fix print_summary bug and add groups of convolution

2018-02-06 Thread GitBox
piiswrong commented on a change in pull request #9492: fix print_summary bug 
and add groups of convolution
URL: https://github.com/apache/incubator-mxnet/pull/9492#discussion_r166405375
 
 

 ##
 File path: python/mxnet/visualization.py
 ##
 @@ -134,17 +134,23 @@ def print_layer_summary(node, out_shape):
 pre_filter = pre_filter + int(shape[0])
 cur_param = 0
 if op == 'Convolution':
-if ("no_bias" in node["attrs"]) and int(node["attrs"]["no_bias"]):
-cur_param = pre_filter * int(node["attrs"]["num_filter"])
+if ("no_bias" in node["attrs"]) and node["attrs"]["no_bias"] == 
'True':
+num_group = int(node["attrs"]["num_group"]) if \
+   ("num_group" in node["attrs"]) else 1
+cur_param = (pre_filter * int(node["attrs"]["num_filter"])) \
+   // num_group
 for k in _str2tuple(node["attrs"]["kernel"]):
 cur_param *= int(k)
 else:
-cur_param = pre_filter * int(node["attrs"]["num_filter"])
+num_group = int(node["attrs"]["num_group"]) if \
+   ("num_group" in node["attrs"]) else 1
+cur_param = (pre_filter * int(node["attrs"]["num_filter"])) \
+   // num_group
 for k in _str2tuple(node["attrs"]["kernel"]):
 cur_param *= int(k)
 cur_param += int(node["attrs"]["num_filter"])
 elif op == 'FullyConnected':
-if ("no_bias" in node["attrs"]) and int(node["attrs"]["no_bias"]):
+if ("no_bias" in node["attrs"]) and node["attrs"]["no_bias"] == 
'True':
 
 Review comment:
   Are you sure about this?
   bool("False") is True
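   
   For illustration (plain Python, independent of MXNet):
   
   ```
   # Any non-empty string is truthy, so bool() cannot distinguish "True" from "False".
   assert bool("False") is True
   # Comparing against the literal string does distinguish the two.
   assert ("False" == 'True') is False
   ```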


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] zhanghang1989 commented on issue #9688: bilinear upsample from PyTorch

2018-02-06 Thread GitBox
zhanghang1989 commented on issue #9688: bilinear upsample from PyTorch
URL: https://github.com/apache/incubator-mxnet/pull/9688#issuecomment-363525503
 
 
   @chinakook 
   ```
   import mxnet as mx
   x1 = mx.nd.ones(shape=(2,3,4,4))
   y1 = mx.nd.BilinearUpsample2D(x1, out_height=5, out_width=5)
   ```


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] ptrendx commented on issue #9705: Added unittest for benchmarking metric performance

2018-02-06 Thread GitBox
ptrendx commented on issue #9705: Added unittest for benchmarking metric 
performance
URL: https://github.com/apache/incubator-mxnet/pull/9705#issuecomment-363525229
 
 
   @safrooze This is the wrong reasoning - the fact that the GPU is faster than the 
CPU when processing a million elements does not mean you should use the GPU when 
adding 2 numbers together. You should only test on batch sizes that are realistic 
(and do multiple runs to get a measurable time difference).


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] anirudh2290 commented on issue #9681: Better Exception Handling for Operators

2018-02-06 Thread GitBox
anirudh2290 commented on issue #9681: Better Exception Handling for Operators
URL: https://github.com/apache/incubator-mxnet/pull/9681#issuecomment-363523771
 
 
   ```
   try:
       x, y, z = op()
       x.asnumpy()
   except:
       handle_exc()
   
   # The below just pushes the operation to the engine; there is no guarantee that
   # op2 is executed (ExecuteOprBlock may not be called).
   y = op2(y)
   y.asnumpy()  # guarantees that all the operations writing to y are executed
   ```
   @KellenSunderland As you mentioned, since it is a lazy operation, there is no 
guarantee that the operation is executed, just that it is pushed to the engine. So 
there is no guarantee that ExecuteOprBlock is called for the operator. On the 
other hand, it is guaranteed that all operations which write to a particular 
variable are executed when the blocking call on that variable is made. 
Therefore, I have rethrown exceptions in WaitForVar and WaitForAll. I 
understand that this may not be as intuitive to users as throwing on the 
`y = op2(y)` line itself, but I don't think it is possible to rethrow when the 
operation is pushed to the engine.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] eric-haibin-lin commented on issue #9716: Reduce ndarray size in test which produces a huge memory spike which ?

2018-02-06 Thread GitBox
eric-haibin-lin commented on issue #9716: Reduce ndarray size in test which 
produces a huge memory spike which ?
URL: https://github.com/apache/incubator-mxnet/pull/9716#issuecomment-363523643
 
 
   The big shape is intended for the test because the operator failed to handle 
such a shape previously. Maybe we can move unit tests with large memory 
consumption to the nightly tests?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] DickJC123 commented on issue #9583: use nd for accuracy calculation

2018-02-06 Thread GitBox
DickJC123 commented on issue #9583: use nd for accuracy calculation
URL: https://github.com/apache/incubator-mxnet/pull/9583#issuecomment-363520815
 
 
   We saw the perf regression on a build that includes this PR and also ed823b2 
"proper flatten in acc (#9619)."


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on a change in pull request #9662: Gluon PReLU, ELU, SELU, Swish

2018-02-06 Thread GitBox
piiswrong commented on a change in pull request #9662: Gluon PReLU, ELU, SELU, 
Swish
URL: https://github.com/apache/incubator-mxnet/pull/9662#discussion_r166400037
 
 

 ##
 File path: src/operator/leaky_relu-inl.h
 ##
 @@ -225,7 +242,11 @@ class LeakyReLUProp : public OperatorProperty {
 const TShape &dshape = in_shape->at(leakyrelu::kData);
 if (dshape.ndim() == 0) return false;
 if (param_.act_type == leakyrelu::kPReLU) {
-  in_shape->at(leakyrelu::kGamma) = TShape(Shape1(dshape[1]));
+  const TShape &gshape = in_shape->at(leakyrelu::kGamma);
+  if (gshape.Size() != 1)
 
 Review comment:
   if gshape is empty gshape.ndim() would be 0


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] reminisce commented on issue #8668: Precision error setting NDArray from np.float32 scalar

2018-02-06 Thread GitBox
reminisce commented on issue #8668: Precision error setting NDArray from 
np.float32 scalar
URL: 
https://github.com/apache/incubator-mxnet/issues/8668#issuecomment-363517251
 
 
   @larroy Seems mxnet is not properly installed on the device. Could you 
verify? Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] reminisce commented on a change in pull request #9700: Squeeze op

2018-02-06 Thread GitBox
reminisce commented on a change in pull request #9700: Squeeze op
URL: https://github.com/apache/incubator-mxnet/pull/9700#discussion_r166396108
 
 

 ##
 File path: src/operator/tensor/matrix_op.cc
 ##
 @@ -739,5 +740,43 @@ NNVM_REGISTER_OP(_backward_stack)
 .set_attr("TIsBackward", true)
 .set_attr("FCompute", StackOpBackward);
 
+NNVM_REGISTER_OP(squeeze)
+.describe(R"code(Remove single-dimensional entries from the shape of an array.
+Same behavior of defining the output tensor shape as numpy.squeeze for the 
most of cases.
+See the following note for exception.
+
+Examples::
+
+  data = [[[0], [1], [2]]]
+  squeeze(data) = [0, 1, 2]
+  squeeze(data, axis=0) = [[0], [1], [2]]
+  squeeze(data, axis=2) = [[0, 1, 2]]
+  squeeze(data, axis=(0, 2)) = [0, 1, 2]
+
+.. Note::
+  The output of this operator will keep at least one dimension not removed. 
For example,
+  squeeze([[[4]]]) = [4], while in numpy.squeeze, the output will become a 
scalar.
+)code")
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr_parser(ParamParser)
+.set_attr("FListInputNames",
+  [](const NodeAttrs& attrs) {
+return std::vector{"data"};
+  })
+.set_attr("FInferShape", SqueezeShape)
+.set_attr("FInferType", ElemwiseType<1, 1>)
+.set_attr("FCompute", SqueezeCompute)
 
 Review comment:
   You are right. Changed now. Thanks.


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on issue #9681: Better Exception Handling for Operators

2018-02-06 Thread GitBox
piiswrong commented on issue #9681: Better Exception Handling for Operators
URL: https://github.com/apache/incubator-mxnet/pull/9681#issuecomment-363515561
 
 
   What I meant is, for example:
   ```
   try:
       x, y, z = op()
       x.asnumpy()
   except:
       handle_exc()
   y.asnumpy()  # Fail
   ```
   Currently y.asnumpy() will fail again with the same error as x.asnumpy().
   But a single exception shouldn't be raised twice


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] larroy opened a new pull request #9717: Doc improvement

2018-02-06 Thread GitBox
larroy opened a new pull request #9717: Doc improvement
URL: https://github.com/apache/incubator-mxnet/pull/9717
 
 
   ## Description ##
   Improve take documentation 
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain the what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on a change in pull request #9700: Squeeze op

2018-02-06 Thread GitBox
piiswrong commented on a change in pull request #9700: Squeeze op
URL: https://github.com/apache/incubator-mxnet/pull/9700#discussion_r166392314
 
 

 ##
 File path: src/operator/tensor/matrix_op.cc
 ##
 @@ -739,5 +740,43 @@ NNVM_REGISTER_OP(_backward_stack)
 .set_attr("TIsBackward", true)
 .set_attr("FCompute", StackOpBackward);
 
+NNVM_REGISTER_OP(squeeze)
+.describe(R"code(Remove single-dimensional entries from the shape of an array.
+Same behavior of defining the output tensor shape as numpy.squeeze for the 
most of cases.
+See the following note for exception.
+
+Examples::
+
+  data = [[[0], [1], [2]]]
+  squeeze(data) = [0, 1, 2]
+  squeeze(data, axis=0) = [[0], [1], [2]]
+  squeeze(data, axis=2) = [[0, 1, 2]]
+  squeeze(data, axis=(0, 2)) = [0, 1, 2]
+
+.. Note::
+  The output of this operator will keep at least one dimension not removed. 
For example,
+  squeeze([[[4]]]) = [4], while in numpy.squeeze, the output will become a 
scalar.
+)code")
+.set_num_inputs(1)
+.set_num_outputs(1)
+.set_attr_parser(ParamParser)
+.set_attr("FListInputNames",
+  [](const NodeAttrs& attrs) {
+return std::vector{"data"};
+  })
+.set_attr("FInferShape", SqueezeShape)
+.set_attr("FInferType", ElemwiseType<1, 1>)
+.set_attr("FCompute", SqueezeCompute)
 
 Review comment:
   I think you can use IdentityCompute directly?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] piiswrong commented on issue #9716: Reduce ndarray size in test which produces a huge memory spike which ?

2018-02-06 Thread GitBox
piiswrong commented on issue #9716: Reduce ndarray size in test which produces 
a huge memory spike which ?
URL: https://github.com/apache/incubator-mxnet/pull/9716#issuecomment-363511172
 
 
   @sxjscience I think this was to catch an issue?


This is an automated message from the Apache Git Service.
To respond to the message, please log on GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch v1.1.0 updated (8b3c9eb -> 7d6fab9)

2018-02-06 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a change to branch v1.1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 8b3c9eb  Update NEWS.md
 new 832cf82  fixed links that were missng ndarray folder path (#9618)
 new 7d6fab9  Fixed 4 broken links (#9698)

The 2 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 docs/community/contribute.md  | 2 +-
 docs/faq/finetune.md  | 2 +-
 docs/faq/multi_devices.md | 2 +-
 docs/tutorials/index.md   | 4 ++--
 python/mxnet/gluon/trainer.py | 4 ++--
 5 files changed, 7 insertions(+), 7 deletions(-)

-- 
To stop receiving notification emails like this one, please contact
hai...@apache.org.


[incubator-mxnet] 01/02: fixed links that were missng ndarray folder path (#9618)

2018-02-06 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch v1.1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 832cf82688d9929bf804bd40ef3b4fb94dd2bb56
Author: thinksanky <31976455+thinksa...@users.noreply.github.com>
AuthorDate: Mon Jan 29 14:13:49 2018 -0800

fixed links that were missng ndarray folder path (#9618)
---
 docs/community/contribute.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/community/contribute.md b/docs/community/contribute.md
index 5bb790e..9c3c3e1 100644
--- a/docs/community/contribute.md
+++ b/docs/community/contribute.md
@@ -103,7 +103,7 @@ or is conceptual, add it in the C++ documentation. Make 
sure your example works
 by running a Python version of the example.
   * If a concrete and simple language-specific example can further clarify the 
API and the API arguments, add the
 example in language-specific files.
-* Refer to these examples for guidance:- 
[Embedding](http://mxnet.io/api/python/ndarray.html#mxnet.ndarray.Embedding) , 
[ROIPooling](http://mxnet.io/api/python/ndarray.html#mxnet.ndarray.ROIPooling) 
, [Reshape](http://mxnet.io/api/python/ndarray.html#mxnet.ndarray.Reshape).
+* Refer to these examples for guidance:- 
[Embedding](http://mxnet.io/api/python/ndarray/ndarray.html#mxnet.ndarray.Embedding)
 , 
[ROIPooling](http://mxnet.io/api/python/ndarray/ndarray.html#mxnet.ndarray.ROIPooling)
 , 
[Reshape](http://mxnet.io/api/python/ndarray/ndarray.html#mxnet.ndarray.Reshape).
 
 ### Testing and Rendering
 * Make sure not to break any coding standards. Run

-- 
To stop receiving notification emails like this one, please contact
hai...@apache.org.


[incubator-mxnet] 02/02: Fixed 4 broken links (#9698)

2018-02-06 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch v1.1.0
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit 7d6fab98824e61c7e05cb08b3554858e1037e8cb
Author: thinksanky <31976455+thinksa...@users.noreply.github.com>
AuthorDate: Tue Feb 6 09:17:06 2018 -0800

Fixed 4 broken links (#9698)

* Fixed 4 broken links

* fixed pylint for long line disable and 1 broken link
---
 docs/faq/finetune.md  | 2 +-
 docs/faq/multi_devices.md | 2 +-
 docs/tutorials/index.md   | 4 ++--
 python/mxnet/gluon/trainer.py | 4 ++--
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/docs/faq/finetune.md b/docs/faq/finetune.md
index 2c6c7e3..533c3ca 100644
--- a/docs/faq/finetune.md
+++ b/docs/faq/finetune.md
@@ -15,7 +15,7 @@ with these pretrained weights when training on our new task. 
This process is
 commonly called _fine-tuning_. There are a number of variations of fine-tuning.
 Sometimes, the initial neural network is used only as a _feature extractor_.
 That means that we freeze every layer prior to the output layer and simply 
learn
-a new output layer. In [another 
document](https://github.com/dmlc/mxnet-notebooks/blob/master/python/faq/predict.ipynb),
 we explained how to
+a new output layer. In [another 
document](https://github.com/dmlc/mxnet-notebooks/blob/master/python/how_to/predict.ipynb),
 we explained how to
 do this kind of feature extraction. Another approach is to update all of
 the network's weights for the new task, and that's the approach we demonstrate 
in
 this document.
diff --git a/docs/faq/multi_devices.md b/docs/faq/multi_devices.md
index 5d538bc..b9cb3ea 100644
--- a/docs/faq/multi_devices.md
+++ b/docs/faq/multi_devices.md
@@ -210,4 +210,4 @@ export PS_VERBOSE=1; python ../../tools/launch.py ...
 ### More
 
 - See more launch options by `python ../../tools/launch.py -h`
-- See more options of 
[ps-lite](http://ps-lite.readthedocs.org/en/latest/faq.html)
+- See more options of [ps-lite](https://ps-lite.readthedocs.io/en/latest)
diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index aca091c..3eff299 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -134,7 +134,7 @@ The Gluon and Module tutorials are in Python, but you can 
also find a variety of
 
 - [Imperative tensor operations on 
CPU/GPU](http://mxnet.incubator.apache.org/tutorials/basic/ndarray.html)
 
-- [NDArray 
Indexing](http://mxnet.incubator.apache.org/tutorials/basic/ndarray_indexing.html)
+- [NDArray Indexing](../tutorials/basic/ndarray_indexing.html)
 
 - [Symbol API](http://mxnet.incubator.apache.org/tutorials/basic/symbol.html)
 
@@ -174,7 +174,7 @@ The Gluon and Module tutorials are in Python, but you can 
also find a variety of
 
 
 
-- [Connectionist Temporal 
Classification](http://mxnet.incubator.apache.org/tutorials/speech_recognition/ctc.html)
+- [Connectionist Temporal 
Classification](../tutorials/speech_recognition/ctc.html)
 
 - [Distributed key-value 
store](http://mxnet.incubator.apache.org/tutorials/python/kvstore.html)
 
diff --git a/python/mxnet/gluon/trainer.py b/python/mxnet/gluon/trainer.py
index 71c144f..c8822bb 100644
--- a/python/mxnet/gluon/trainer.py
+++ b/python/mxnet/gluon/trainer.py
@@ -16,7 +16,7 @@
 # under the License.
 
 # coding: utf-8
-# pylint: disable=
+# pylint: disable=line-too-long
 """Parameter optimizer."""
 __all__ = ['Trainer']
 
@@ -34,7 +34,7 @@ class Trainer(object):
 The set of parameters to optimize.
 optimizer : str or Optimizer
 The optimizer to use. See
-`help 
`_
+`help 
`_
 on Optimizer for a list of available optimizers.
 optimizer_params : dict
 Key-word arguments to be passed to optimizer constructor. For example,



[GitHub] larroy opened a new pull request #9716: Reduce ndarray size in test which produces a huge memory spike which …

2018-02-06 Thread GitBox
larroy opened a new pull request #9716: Reduce ndarray size in test which 
produces a huge memory spike which …
URL: https://github.com/apache/incubator-mxnet/pull/9716
 
 
   …crashes on some platforms
   
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   - [ ] Passed code style checking (`make lint`)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   




[GitHub] szha closed pull request #9708: Add code signing key

2018-02-06 Thread GitBox
szha closed pull request #9708: Add code signing key
URL: https://github.com/apache/incubator-mxnet/pull/9708
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/KEYS b/KEYS
index d646bb7c3f..5e5769be54 100644
--- a/KEYS
+++ b/KEYS
@@ -363,3 +363,63 @@ 
iEVpHzOV7gd75fJbOvoNxNZj20Yj5sg8OCwbv8PxLXEcBFs7hhjQMhVRsjpNYzAR
 Iw==
 =rMlc
 -----END PGP PUBLIC KEY BLOCK-----
+
+pub   rsa4096 2018-01-28 [SC]
+  7302629A6791AC2C3593B9A0015ED8A29C815704
+uid   [ultimate] Haibin Lin (CODE SIGNING KEY) 
+sig 3015ED8A29C815704 2018-01-28  Haibin Lin (CODE SIGNING KEY) 

+sub   rsa4096 2018-01-28 [E]
+sig  015ED8A29C815704 2018-01-28  Haibin Lin (CODE SIGNING KEY) 

+
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+
+mQINBFptdRQBEACxk4vidIZ9n14poPFKMayxA0P4o92pboPzYf5rqzgD5cjcVBxk
+uuWqDEEbj3wYCdluTw4sO4jENIBstUY0pIJtuUIsGW9KU14DKsnO+Od6cj/4bAub
+/1/otUJ0D+xDHx3tYEhYEOOOvk1UI7Dd0nxh2K3ymYEZMfki1iMSABwj8Vm0nK2Q
+ZuyswLUbssfZqLaOQ+HuTN54houLqHDuHYz+pyttDH+sxL7c4uJykgJyDx8c0ENW
+0Atejqk8fyKNtVlizcehQff/t7NdKxkpgA3J4ZV2sYvjrD54CiMkhU51At6YO3HX
+L/bPK3deXiNGzHA45mrX8eewgCw92YwdXWQ4OI40smRFm6dBiebXDwfjJkucTMnk
+7RQJSbOE4VezyzwrqKZTHUPBvnwskDXBeNHIdaBwicYkGP8/p1HLvIj0XCC32yHz
+5jaj9qEuTlE1tW0FPpqAFRUNlnVF/wMaDqyV6MdQI9mE6jzzI+8ja9Vi750bs8Ew
+dvyrxf4UcJjc/aMGKcHkxMM6n1aVH/Jl1G7YC8d5K5QXJxuabBc3tp5PI2p6iBdy
+nNpJKmJLKNVCm8rXu0XbSQxoM6QBOF6IlIjExtKXUqKUSs426p81V8dnRCQFg8fP
+Ha7hxYaO2hJHxNx4lIgVgZZj61q5EIpmyNZ4gITkCu6kiGDoBxyruGrlXQARAQAB
+tDFIYWliaW4gTGluIChDT0RFIFNJR05JTkcgS0VZKSA8aGFpYmluQGFwYWNoZS5v
+cmc+iQJOBBMBCAA4FiEEcwJimmeRrCw1k7mgAV7YopyBVwQFAlptdRQCGwMFCwkI
+BwIGFQgJCgsCBBYCAwECHgECF4AACgkQAV7YopyBVwTKwxAAmy+i4ql+pz6tK9P4
+XYYEkRUPqoXKWamoQWukpnVZmZPPuRr3SPCgBUTLOxm6RSTiuFxahHN+zGHBrpNA
+tLv5uyfVS26e3ugjCeZ+NllMLQ7MB+yVDlb7QFOYWDSZ1iTG1kJ1/I038IZJhM5t
+TVAYVICQlUNbi9AI3iHWRzRQswZxFWuuMwTUDsP7yvcIgwMh6keUmNhyRe+GTPFJ
+qwroW+fXbLZ59YqGt/eLvg6kodgia1deBRygjcbAH0B0I8TpcV/IQAXC7Vvji0fB
+fLoCcPaUTTTKInejIrSLkOunooVNbIBHfxpBtl6ilWygkFb1TMfNI8BeXKPPnudk
+2MERTn7poYAS8TJYjLomknrjnaIccQyicLxxs4nh3nC9xvZ2CGr0hmuOv2rM/spj
+/KJzbsdLsFfkyMVPKZR8LALYl1YDsOvAzAmEXtcP3S681sHbfyJbKobn4UmvejCH
+9GHJGm+KlBRSpzKTEt6gqsM5DCKjiSiPomC2XAw7ztqTsf1NCeIDuPniIMANEK9b
+pdS5GpBy2XCvmv7epyamZOHQ57t0//9n3qfH3qDHXFaTMC1EEKvZl7q5Wpx3b+H/
+WkJqCf1cMG0fU/7aPAo2zygNYtPnNqyGYs9RMicrj/lnw7Oz8RiygXPgNLXMkci+
+aTftZm5DKZgWikAYTOhBxV1GEbS5Ag0EWm11FAEQAOBSel/yRYwgxalZfajTv52w
+v61UrZBQMuWNxbGHWdQBZnO0BiijgS+u1AWfpAia1ig+Dqfa5U8w/jqbBG63VvwE
+x8PapVuvXJisxhekGFysQxWf0NCVIY9rTHUs529kN/kbZq2XzWnr4aI6f44YYjEa
+lFAnVL/JJ7ewERbI0XHy3d99LoHYKq9ttc9w4CB2dVN4o5g1wyJxG5uzdNcQO6MP
++QPWPUBkBDIWEWtYeXJVTjuCW9VscFfvgnGSDyBPTeXyN3rup9mu3P1g9PopobkV
+cczTNwSqy4vO+vIYgXUAP98cbbJzE6LZIYEpUPki7ooWIk9MDo3oKCLnJE0TOxCv
+R5ZYyIRJkM5Jtt1RdZvKpLRlRGFTx1uW2pHYJMz2VS+rUPy7NLBcLR1N1LnjOot0
+mb4cE0sJDgT1ONqg79sUGRRBCdda291FomZjjb3UW+mM76h9TSgg8OijTzjQJMmn
+sO/Tx69FMdc1VqJ5nI0SThDwP33EQDthvlobUNrU/mEwI0t3Qsukx+Fi5n/hf3x8
+dInzmCSQ4yLsTZttNTF6+YPDuxuMgTzR0P0e/ilSt576FXjWqWXGtA0noXjEtUim
+j7xXbc4WeKWQjV+jMTIrgxbrE2Cr6x/P+rPGqydpmKH+yNMW4IJs5LWk/SFFHPKM
+liSWetxGdjsxn1aX0h4jABEBAAGJAjYEGAEIACAWIQRzAmKaZ5GsLDWTuaABXtii
+nIFXBAUCWm11FAIbDAAKCRABXtiinIFXBECxEACPTAw0o68QEme78qQXi0ls4yxB
+tVPB4DED3ReGNsnUDzmx0MHmzEUv3vJFfOzpeq/bn5ZxGG70k1HcIUF3c15xz9CK
+A3WpxAxwzRHHIPS+xVN6OQXwilo0+lfKNitQgUMVl9QwG8KgNT1sBCm61c4yzqCV
+aRDzuNLnXJpweClLE/QfjZjudGa41yBAp+XVTF/ke1l4OuWCi9udycfNE0LgmoMS
+uyE2g61oTWyxCfKwdmct30YRkligQ8w80KoW/reBEFURS+KWcMSH8rJaVdv8zdAD
+NRktfLtHgZcq3w1WkVX09PhVQK4HTrFBRHit6BvogRMl2de5ByADCjjVCysNfmWf
+qLHor5+LM2KOTBoOptidG6r9bpKoJNKk0a4evmfXRRe79UoAqcbM3UWZBc48M2qv
+tNzRbAcf0S+ltgw7xEW8rge6Vcz9lLTDuBjC7Mg3m1Q2gQO94RgZifNrmVAF31cY
+iBRNeLQGCrOV/Vt8XhD5Un90dA2aKLrW90IG9houHfKNj3vpRU32Qbb9kLpk1MZY
+fDiLw4372qAC4NpSLRpCIBbT33VztUOTmZgIg4zJiQGSp89dEVN8OUT/yjKQps39
+9XwxzS2A4J/DXuYUkCUD0/FKn7OEf0beXVyOoQItucTGIePSGkIT79uG9qptpxZL
+G4kKPLx5+UhNtHsaNA==
+=ZoTi
+-----END PGP PUBLIC KEY BLOCK-----


 




[incubator-mxnet] branch master updated: Update KEYS (#9708)

2018-02-06 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new a80ea3f  Update KEYS (#9708)
a80ea3f is described below

commit a80ea3fe1c8ed882441d47a9b30a8a5f49e2a8d2
Author: Haibin Lin 
AuthorDate: Tue Feb 6 09:27:02 2018 -0800

Update KEYS (#9708)
---
 KEYS | 60 
 1 file changed, 60 insertions(+)

diff --git a/KEYS b/KEYS
index d646bb7..5e5769b 100644
--- a/KEYS
+++ b/KEYS
@@ -363,3 +363,63 @@ 
iEVpHzOV7gd75fJbOvoNxNZj20Yj5sg8OCwbv8PxLXEcBFs7hhjQMhVRsjpNYzAR
 Iw==
 =rMlc
 -----END PGP PUBLIC KEY BLOCK-----
+
+pub   rsa4096 2018-01-28 [SC]
+  7302629A6791AC2C3593B9A0015ED8A29C815704
+uid   [ultimate] Haibin Lin (CODE SIGNING KEY) 
+sig 3015ED8A29C815704 2018-01-28  Haibin Lin (CODE SIGNING KEY) 

+sub   rsa4096 2018-01-28 [E]
+sig  015ED8A29C815704 2018-01-28  Haibin Lin (CODE SIGNING KEY) 

+
+-----BEGIN PGP PUBLIC KEY BLOCK-----
+
+mQINBFptdRQBEACxk4vidIZ9n14poPFKMayxA0P4o92pboPzYf5rqzgD5cjcVBxk
+uuWqDEEbj3wYCdluTw4sO4jENIBstUY0pIJtuUIsGW9KU14DKsnO+Od6cj/4bAub
+/1/otUJ0D+xDHx3tYEhYEOOOvk1UI7Dd0nxh2K3ymYEZMfki1iMSABwj8Vm0nK2Q
+ZuyswLUbssfZqLaOQ+HuTN54houLqHDuHYz+pyttDH+sxL7c4uJykgJyDx8c0ENW
+0Atejqk8fyKNtVlizcehQff/t7NdKxkpgA3J4ZV2sYvjrD54CiMkhU51At6YO3HX
+L/bPK3deXiNGzHA45mrX8eewgCw92YwdXWQ4OI40smRFm6dBiebXDwfjJkucTMnk
+7RQJSbOE4VezyzwrqKZTHUPBvnwskDXBeNHIdaBwicYkGP8/p1HLvIj0XCC32yHz
+5jaj9qEuTlE1tW0FPpqAFRUNlnVF/wMaDqyV6MdQI9mE6jzzI+8ja9Vi750bs8Ew
+dvyrxf4UcJjc/aMGKcHkxMM6n1aVH/Jl1G7YC8d5K5QXJxuabBc3tp5PI2p6iBdy
+nNpJKmJLKNVCm8rXu0XbSQxoM6QBOF6IlIjExtKXUqKUSs426p81V8dnRCQFg8fP
+Ha7hxYaO2hJHxNx4lIgVgZZj61q5EIpmyNZ4gITkCu6kiGDoBxyruGrlXQARAQAB
+tDFIYWliaW4gTGluIChDT0RFIFNJR05JTkcgS0VZKSA8aGFpYmluQGFwYWNoZS5v
+cmc+iQJOBBMBCAA4FiEEcwJimmeRrCw1k7mgAV7YopyBVwQFAlptdRQCGwMFCwkI
+BwIGFQgJCgsCBBYCAwECHgECF4AACgkQAV7YopyBVwTKwxAAmy+i4ql+pz6tK9P4
+XYYEkRUPqoXKWamoQWukpnVZmZPPuRr3SPCgBUTLOxm6RSTiuFxahHN+zGHBrpNA
+tLv5uyfVS26e3ugjCeZ+NllMLQ7MB+yVDlb7QFOYWDSZ1iTG1kJ1/I038IZJhM5t
+TVAYVICQlUNbi9AI3iHWRzRQswZxFWuuMwTUDsP7yvcIgwMh6keUmNhyRe+GTPFJ
+qwroW+fXbLZ59YqGt/eLvg6kodgia1deBRygjcbAH0B0I8TpcV/IQAXC7Vvji0fB
+fLoCcPaUTTTKInejIrSLkOunooVNbIBHfxpBtl6ilWygkFb1TMfNI8BeXKPPnudk
+2MERTn7poYAS8TJYjLomknrjnaIccQyicLxxs4nh3nC9xvZ2CGr0hmuOv2rM/spj
+/KJzbsdLsFfkyMVPKZR8LALYl1YDsOvAzAmEXtcP3S681sHbfyJbKobn4UmvejCH
+9GHJGm+KlBRSpzKTEt6gqsM5DCKjiSiPomC2XAw7ztqTsf1NCeIDuPniIMANEK9b
+pdS5GpBy2XCvmv7epyamZOHQ57t0//9n3qfH3qDHXFaTMC1EEKvZl7q5Wpx3b+H/
+WkJqCf1cMG0fU/7aPAo2zygNYtPnNqyGYs9RMicrj/lnw7Oz8RiygXPgNLXMkci+
+aTftZm5DKZgWikAYTOhBxV1GEbS5Ag0EWm11FAEQAOBSel/yRYwgxalZfajTv52w
+v61UrZBQMuWNxbGHWdQBZnO0BiijgS+u1AWfpAia1ig+Dqfa5U8w/jqbBG63VvwE
+x8PapVuvXJisxhekGFysQxWf0NCVIY9rTHUs529kN/kbZq2XzWnr4aI6f44YYjEa
+lFAnVL/JJ7ewERbI0XHy3d99LoHYKq9ttc9w4CB2dVN4o5g1wyJxG5uzdNcQO6MP
++QPWPUBkBDIWEWtYeXJVTjuCW9VscFfvgnGSDyBPTeXyN3rup9mu3P1g9PopobkV
+cczTNwSqy4vO+vIYgXUAP98cbbJzE6LZIYEpUPki7ooWIk9MDo3oKCLnJE0TOxCv
+R5ZYyIRJkM5Jtt1RdZvKpLRlRGFTx1uW2pHYJMz2VS+rUPy7NLBcLR1N1LnjOot0
+mb4cE0sJDgT1ONqg79sUGRRBCdda291FomZjjb3UW+mM76h9TSgg8OijTzjQJMmn
+sO/Tx69FMdc1VqJ5nI0SThDwP33EQDthvlobUNrU/mEwI0t3Qsukx+Fi5n/hf3x8
+dInzmCSQ4yLsTZttNTF6+YPDuxuMgTzR0P0e/ilSt576FXjWqWXGtA0noXjEtUim
+j7xXbc4WeKWQjV+jMTIrgxbrE2Cr6x/P+rPGqydpmKH+yNMW4IJs5LWk/SFFHPKM
+liSWetxGdjsxn1aX0h4jABEBAAGJAjYEGAEIACAWIQRzAmKaZ5GsLDWTuaABXtii
+nIFXBAUCWm11FAIbDAAKCRABXtiinIFXBECxEACPTAw0o68QEme78qQXi0ls4yxB
+tVPB4DED3ReGNsnUDzmx0MHmzEUv3vJFfOzpeq/bn5ZxGG70k1HcIUF3c15xz9CK
+A3WpxAxwzRHHIPS+xVN6OQXwilo0+lfKNitQgUMVl9QwG8KgNT1sBCm61c4yzqCV
+aRDzuNLnXJpweClLE/QfjZjudGa41yBAp+XVTF/ke1l4OuWCi9udycfNE0LgmoMS
+uyE2g61oTWyxCfKwdmct30YRkligQ8w80KoW/reBEFURS+KWcMSH8rJaVdv8zdAD
+NRktfLtHgZcq3w1WkVX09PhVQK4HTrFBRHit6BvogRMl2de5ByADCjjVCysNfmWf
+qLHor5+LM2KOTBoOptidG6r9bpKoJNKk0a4evmfXRRe79UoAqcbM3UWZBc48M2qv
+tNzRbAcf0S+ltgw7xEW8rge6Vcz9lLTDuBjC7Mg3m1Q2gQO94RgZifNrmVAF31cY
+iBRNeLQGCrOV/Vt8XhD5Un90dA2aKLrW90IG9houHfKNj3vpRU32Qbb9kLpk1MZY
+fDiLw4372qAC4NpSLRpCIBbT33VztUOTmZgIg4zJiQGSp89dEVN8OUT/yjKQps39
+9XwxzS2A4J/DXuYUkCUD0/FKn7OEf0beXVyOoQItucTGIePSGkIT79uG9qptpxZL
+G4kKPLx5+UhNtHsaNA==
+=ZoTi
+-----END PGP PUBLIC KEY BLOCK-----



[GitHub] szha closed pull request #9707: Bump version to 1.1.0

2018-02-06 Thread GitBox
szha closed pull request #9707: Bump version to 1.1.0
URL: https://github.com/apache/incubator-mxnet/pull/9707
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/R-package/DESCRIPTION b/R-package/DESCRIPTION
index 2996eed5db..0ec7f3667c 100644
--- a/R-package/DESCRIPTION
+++ b/R-package/DESCRIPTION
@@ -1,7 +1,7 @@
 Package: mxnet
 Type: Package
 Title: MXNet: A Flexible and Efficient Machine Learning Library for 
Heterogeneous Distributed Systems
-Version: 1.0.1
+Version: 1.1.0
 Date: 2017-06-27
 Author: Tianqi Chen, Qiang Kou, Tong He
 Maintainer: Qiang Kou 
diff --git a/include/mxnet/base.h b/include/mxnet/base.h
index f482e4fc0b..d5a861852f 100644
--- a/include/mxnet/base.h
+++ b/include/mxnet/base.h
@@ -111,9 +111,9 @@
 /*! \brief major version */
 #define MXNET_MAJOR 1
 /*! \brief minor version */
-#define MXNET_MINOR 0
+#define MXNET_MINOR 1
 /*! \brief patch version */
-#define MXNET_PATCH 1
+#define MXNET_PATCH 0
 /*! \brief mxnet version */
 #define MXNET_VERSION (MXNET_MAJOR*10000 + MXNET_MINOR*100 + MXNET_PATCH)
 /*! \brief helper for making version number */
diff --git a/python/mxnet/libinfo.py b/python/mxnet/libinfo.py
index 9ab0f5960a..8ccac29d5f 100644
--- a/python/mxnet/libinfo.py
+++ b/python/mxnet/libinfo.py
@@ -61,4 +61,4 @@ def find_lib_path():
 
 
 # current version
-__version__ = "1.0.1"
+__version__ = "1.1.0"
diff --git a/scala-package/assembly/linux-x86_64-cpu/pom.xml 
b/scala-package/assembly/linux-x86_64-cpu/pom.xml
index 75f2d2cdcb..cbcd7acdaf 100644
--- a/scala-package/assembly/linux-x86_64-cpu/pom.xml
+++ b/scala-package/assembly/linux-x86_64-cpu/pom.xml
@@ -6,7 +6,7 @@
   
 ml.dmlc.mxnet
 mxnet-full-parent_2.11
-1.0.1-SNAPSHOT
+1.1.0-SNAPSHOT
 ../pom.xml
   
 
@@ -18,12 +18,12 @@
 
   ml.dmlc.mxnet
   mxnet-core_${scala.binary.version}
-  1.0.1-SNAPSHOT
+  1.1.0-SNAPSHOT
 
 
   ml.dmlc.mxnet
   libmxnet-scala-linux-x86_64-cpu
-  1.0.1-SNAPSHOT
+  1.1.0-SNAPSHOT
   so
 
   
diff --git a/scala-package/assembly/linux-x86_64-gpu/pom.xml 
b/scala-package/assembly/linux-x86_64-gpu/pom.xml
index 7c7162dbec..cfe22e7eea 100644
--- a/scala-package/assembly/linux-x86_64-gpu/pom.xml
+++ b/scala-package/assembly/linux-x86_64-gpu/pom.xml
@@ -6,7 +6,7 @@
   
 ml.dmlc.mxnet
 mxnet-full-parent_2.11
-1.0.1-SNAPSHOT
+1.1.0-SNAPSHOT
 ../pom.xml
   
 
@@ -18,12 +18,12 @@
 
   ml.dmlc.mxnet
   mxnet-core_${scala.binary.version}
-  1.0.1-SNAPSHOT
+  1.1.0-SNAPSHOT
 
 
   ml.dmlc.mxnet
   libmxnet-scala-linux-x86_64-gpu
-  1.0.1-SNAPSHOT
+  1.1.0-SNAPSHOT
   so
 
   
diff --git a/scala-package/assembly/osx-x86_64-cpu/pom.xml 
b/scala-package/assembly/osx-x86_64-cpu/pom.xml
index 0b5c4e20b4..7f7f1ab75c 100644
--- a/scala-package/assembly/osx-x86_64-cpu/pom.xml
+++ b/scala-package/assembly/osx-x86_64-cpu/pom.xml
@@ -6,7 +6,7 @@
   
 ml.dmlc.mxnet
 mxnet-full-parent_2.11
-1.0.1-SNAPSHOT
+1.1.0-SNAPSHOT
 ../pom.xml
   
 
@@ -18,12 +18,12 @@
 
   ml.dmlc.mxnet
   mxnet-core_${scala.binary.version}
-  1.0.1-SNAPSHOT
+  1.1.0-SNAPSHOT
 
 
   ml.dmlc.mxnet
   libmxnet-scala-osx-x86_64-cpu
-  1.0.1-SNAPSHOT
+  1.1.0-SNAPSHOT
   jnilib
 
   
diff --git a/scala-package/assembly/pom.xml b/scala-package/assembly/pom.xml
index efa3b75b15..a755d7cb84 100644
--- a/scala-package/assembly/pom.xml
+++ b/scala-package/assembly/pom.xml
@@ -6,7 +6,7 @@
   
 ml.dmlc.mxnet
 mxnet-parent_2.11
-1.0.1-SNAPSHOT
+1.1.0-SNAPSHOT
 ../pom.xml
   
 
diff --git a/scala-package/core/pom.xml b/scala-package/core/pom.xml
index b7219064a5..0df7047505 100644
--- a/scala-package/core/pom.xml
+++ b/scala-package/core/pom.xml
@@ -6,7 +6,7 @@
   
 ml.dmlc.mxnet
 mxnet-parent_2.11
-1.0.1-SNAPSHOT
+1.1.0-SNAPSHOT
 ../pom.xml
   
 
@@ -71,13 +71,13 @@
 
   ml.dmlc.mxnet
   mxnet-init_${scala.binary.version}
-  1.0.1-SNAPSHOT
+  1.1.0-SNAPSHOT
   provided
 
 
   ml.dmlc.mxnet
   mxnet-macros_${scala.binary.version}
-  1.0.1-SNAPSHOT
+  1.1.0-SNAPSHOT
   provided
 
   
diff --git a/scala-package/examples/pom.xml b/scala-package/examples/pom.xml
index 87ce898472..a23b7b91f6 100644
--- a/scala-package/examples/pom.xml
+++ b/scala-package/examples/pom.xml
@@ -6,7 +6,7 @@
   
 ml.dmlc.mxnet
 mxnet-parent_2.11
-1.0.1-SNAPSHOT
+1.1.0-SNAPSHOT
 ../pom.xml
   
 
@@ -118,7 +118,7 @@
 
   ml.dmlc.mxnet
   mxnet-core_${scala.binary.version}
-  1.0.1-SNAPSHOT
+  1.1.0-SNAPSHOT
   provided
 
 
diff --git a/scala-package/init-native/linux-x86_64/pom.xml 
b/scala-package

[incubator-mxnet] branch master updated: Bump version to 1.1.0 (#9707)

2018-02-06 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 00a12f3  Bump version to 1.1.0 (#9707)
00a12f3 is described below

commit 00a12f340b8d0a3d683bb5e7ca075d3e58fdb2b9
Author: Haibin Lin 
AuthorDate: Tue Feb 6 09:26:35 2018 -0800

Bump version to 1.1.0 (#9707)

* Bump 1.1 (#192)

* bump

* also update base.h

* revert website changes

* Update index.html

* revert mxnet-theme/index.html changes

* remove empty line
---
 R-package/DESCRIPTION   | 2 +-
 include/mxnet/base.h| 4 ++--
 python/mxnet/libinfo.py | 2 +-
 scala-package/assembly/linux-x86_64-cpu/pom.xml | 6 +++---
 scala-package/assembly/linux-x86_64-gpu/pom.xml | 6 +++---
 scala-package/assembly/osx-x86_64-cpu/pom.xml   | 6 +++---
 scala-package/assembly/pom.xml  | 2 +-
 scala-package/core/pom.xml  | 6 +++---
 scala-package/examples/pom.xml  | 4 ++--
 scala-package/init-native/linux-x86_64/pom.xml  | 4 ++--
 scala-package/init-native/osx-x86_64/pom.xml| 4 ++--
 scala-package/init-native/pom.xml   | 2 +-
 scala-package/init/pom.xml  | 2 +-
 scala-package/macros/pom.xml| 6 +++---
 scala-package/native/linux-x86_64-cpu/pom.xml   | 4 ++--
 scala-package/native/linux-x86_64-gpu/pom.xml   | 4 ++--
 scala-package/native/osx-x86_64-cpu/pom.xml | 4 ++--
 scala-package/native/pom.xml| 2 +-
 scala-package/pom.xml   | 2 +-
 scala-package/spark/pom.xml | 4 ++--
 snapcraft.yaml  | 2 +-
 21 files changed, 39 insertions(+), 39 deletions(-)

diff --git a/R-package/DESCRIPTION b/R-package/DESCRIPTION
index 2996eed..0ec7f36 100644
--- a/R-package/DESCRIPTION
+++ b/R-package/DESCRIPTION
@@ -1,7 +1,7 @@
 Package: mxnet
 Type: Package
 Title: MXNet: A Flexible and Efficient Machine Learning Library for 
Heterogeneous Distributed Systems
-Version: 1.0.1
+Version: 1.1.0
 Date: 2017-06-27
 Author: Tianqi Chen, Qiang Kou, Tong He
 Maintainer: Qiang Kou 
diff --git a/include/mxnet/base.h b/include/mxnet/base.h
index f482e4f..d5a8618 100644
--- a/include/mxnet/base.h
+++ b/include/mxnet/base.h
@@ -111,9 +111,9 @@
 /*! \brief major version */
 #define MXNET_MAJOR 1
 /*! \brief minor version */
-#define MXNET_MINOR 0
+#define MXNET_MINOR 1
 /*! \brief patch version */
-#define MXNET_PATCH 1
+#define MXNET_PATCH 0
 /*! \brief mxnet version */
 #define MXNET_VERSION (MXNET_MAJOR*10000 + MXNET_MINOR*100 + MXNET_PATCH)
 /*! \brief helper for making version number */
diff --git a/python/mxnet/libinfo.py b/python/mxnet/libinfo.py
index 9ab0f59..8ccac29 100644
--- a/python/mxnet/libinfo.py
+++ b/python/mxnet/libinfo.py
@@ -61,4 +61,4 @@ def find_lib_path():
 
 
 # current version
-__version__ = "1.0.1"
+__version__ = "1.1.0"
diff --git a/scala-package/assembly/linux-x86_64-cpu/pom.xml 
b/scala-package/assembly/linux-x86_64-cpu/pom.xml
index 75f2d2c..cbcd7ac 100644
--- a/scala-package/assembly/linux-x86_64-cpu/pom.xml
+++ b/scala-package/assembly/linux-x86_64-cpu/pom.xml
@@ -6,7 +6,7 @@
   
 ml.dmlc.mxnet
 mxnet-full-parent_2.11
-1.0.1-SNAPSHOT
+1.1.0-SNAPSHOT
 ../pom.xml
   
 
@@ -18,12 +18,12 @@
 
   ml.dmlc.mxnet
   mxnet-core_${scala.binary.version}
-  1.0.1-SNAPSHOT
+  1.1.0-SNAPSHOT
 
 
   ml.dmlc.mxnet
   libmxnet-scala-linux-x86_64-cpu
-  1.0.1-SNAPSHOT
+  1.1.0-SNAPSHOT
   so
 
   
diff --git a/scala-package/assembly/linux-x86_64-gpu/pom.xml 
b/scala-package/assembly/linux-x86_64-gpu/pom.xml
index 7c7162d..cfe22e7 100644
--- a/scala-package/assembly/linux-x86_64-gpu/pom.xml
+++ b/scala-package/assembly/linux-x86_64-gpu/pom.xml
@@ -6,7 +6,7 @@
   
 ml.dmlc.mxnet
 mxnet-full-parent_2.11
-1.0.1-SNAPSHOT
+1.1.0-SNAPSHOT
 ../pom.xml
   
 
@@ -18,12 +18,12 @@
 
   ml.dmlc.mxnet
   mxnet-core_${scala.binary.version}
-  1.0.1-SNAPSHOT
+  1.1.0-SNAPSHOT
 
 
   ml.dmlc.mxnet
   libmxnet-scala-linux-x86_64-gpu
-  1.0.1-SNAPSHOT
+  1.1.0-SNAPSHOT
   so
 
   
diff --git a/scala-package/assembly/osx-x86_64-cpu/pom.xml 
b/scala-package/assembly/osx-x86_64-cpu/pom.xml
index 0b5c4e2..7f7f1ab 100644
--- a/scala-package/assembly/osx-x86_64-cpu/pom.xml
+++ b/scala-package/assembly/osx-x86_64-cpu/pom.xml
@@ -6,7 +6,7 @@
   
 ml.dmlc.mxnet
 mxnet-full-parent_2.11
-1.0.1-SNAPSHOT
+1.1.0-SNAPSHOT
 ../pom.xml
   
 
@@ -18,12 +18,12 @@
 
   ml.dmlc.mxnet
   mxnet-core_${scala.binary.version}
-  1.0.1-SNAPSHOT
+  1.1.0-SNAPSHOT
 
 
   ml.dmlc.mxnet
   libmxnet-scala

[incubator-mxnet] branch master updated: Update NOTICE (#9706)

2018-02-06 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new 74d5c82  Update NOTICE (#9706)
74d5c82 is described below

commit 74d5c828d767cb1d3157ce9f25f82b181dd1d29a
Author: Haibin Lin 
AuthorDate: Tue Feb 6 09:25:50 2018 -0800

Update NOTICE (#9706)
---
 NOTICE | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/NOTICE b/NOTICE
index a12b99f..98321cb 100644
--- a/NOTICE
+++ b/NOTICE
@@ -1,5 +1,5 @@
 Apache MXNET (incubating)
-Copyright 2017- The Apache Software Foundation
+Copyright 2017-2018 The Apache Software Foundation
 
 This product includes software developed at
 The Apache Software Foundation (http://www.apache.org/).



[GitHub] szha closed pull request #9706: Update years in NOTICE file to 2017-2018

2018-02-06 Thread GitBox
szha closed pull request #9706: Update years in NOTICE file to 2017-2018
URL: https://github.com/apache/incubator-mxnet/pull/9706
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/NOTICE b/NOTICE
index a12b99f5b5..98321cba7c 100644
--- a/NOTICE
+++ b/NOTICE
@@ -1,5 +1,5 @@
 Apache MXNET (incubating)
-Copyright 2017- The Apache Software Foundation
+Copyright 2017-2018 The Apache Software Foundation
 
 This product includes software developed at
 The Apache Software Foundation (http://www.apache.org/).


 




[GitHub] reminisce commented on issue #9700: Squeeze op

2018-02-06 Thread GitBox
reminisce commented on issue #9700: Squeeze op
URL: https://github.com/apache/incubator-mxnet/pull/9700#issuecomment-363496627
 
 
   @chinakook `expand_dims`?
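   
   For context, a minimal sketch of the pairing being discussed here (assuming the squeeze operator from this PR is exposed as `mx.nd.squeeze` with an `axis` argument, next to the existing `mx.nd.expand_dims`):
   
   ```
   import mxnet as mx
   
   x = mx.nd.ones((1, 3, 1, 5))
   
   # squeeze (the op added in this PR) drops size-1 axes, all or just one
   y_all = mx.nd.squeeze(x)           # shape (3, 5)
   y_one = mx.nd.squeeze(x, axis=0)   # shape (3, 1, 5)
   
   # expand_dims is the existing inverse: it inserts a size-1 axis back
   z = mx.nd.expand_dims(y_all, axis=0)  # shape (1, 3, 5)
   
   print(x.shape, y_all.shape, y_one.shape, z.shape)
   ```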




[GitHub] eric-haibin-lin closed pull request #9698: Fixed 4 broken links

2018-02-06 Thread GitBox
eric-haibin-lin closed pull request #9698: Fixed 4 broken links
URL: https://github.com/apache/incubator-mxnet/pull/9698
 
 
   

This is a PR merged from a forked repository.
As GitHub hides the original diff on merge, it is displayed below for
the sake of provenance:

As this is a foreign pull request (from a fork), the diff is supplied
below (as it won't show otherwise due to GitHub magic):

diff --git a/docs/faq/finetune.md b/docs/faq/finetune.md
index 2c6c7e3402..533c3caf52 100644
--- a/docs/faq/finetune.md
+++ b/docs/faq/finetune.md
@@ -15,7 +15,7 @@ with these pretrained weights when training on our new task. 
This process is
 commonly called _fine-tuning_. There are a number of variations of fine-tuning.
 Sometimes, the initial neural network is used only as a _feature extractor_.
 That means that we freeze every layer prior to the output layer and simply 
learn
-a new output layer. In [another 
document](https://github.com/dmlc/mxnet-notebooks/blob/master/python/faq/predict.ipynb),
 we explained how to
+a new output layer. In [another 
document](https://github.com/dmlc/mxnet-notebooks/blob/master/python/how_to/predict.ipynb),
 we explained how to
 do this kind of feature extraction. Another approach is to update all of
 the network's weights for the new task, and that's the approach we demonstrate 
in
 this document.
diff --git a/docs/faq/multi_devices.md b/docs/faq/multi_devices.md
index 5d538bca56..b9cb3ea291 100644
--- a/docs/faq/multi_devices.md
+++ b/docs/faq/multi_devices.md
@@ -210,4 +210,4 @@ export PS_VERBOSE=1; python ../../tools/launch.py ...
 ### More
 
 - See more launch options by `python ../../tools/launch.py -h`
-- See more options of 
[ps-lite](http://ps-lite.readthedocs.org/en/latest/faq.html)
+- See more options of [ps-lite](https://ps-lite.readthedocs.io/en/latest)
diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index aca091c41c..3eff299d77 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -134,7 +134,7 @@ The Gluon and Module tutorials are in Python, but you can 
also find a variety of
 
 - [Imperative tensor operations on 
CPU/GPU](http://mxnet.incubator.apache.org/tutorials/basic/ndarray.html)
 
-- [NDArray 
Indexing](http://mxnet.incubator.apache.org/tutorials/basic/ndarray_indexing.html)
+- [NDArray Indexing](../tutorials/basic/ndarray_indexing.html)
 
 - [Symbol API](http://mxnet.incubator.apache.org/tutorials/basic/symbol.html)
 
@@ -174,7 +174,7 @@ The Gluon and Module tutorials are in Python, but you can 
also find a variety of
 
 
 
-- [Connectionist Temporal 
Classification](http://mxnet.incubator.apache.org/tutorials/speech_recognition/ctc.html)
+- [Connectionist Temporal 
Classification](../tutorials/speech_recognition/ctc.html)
 
 - [Distributed key-value 
store](http://mxnet.incubator.apache.org/tutorials/python/kvstore.html)
 
diff --git a/python/mxnet/gluon/trainer.py b/python/mxnet/gluon/trainer.py
index 71c144f80c..c8822bb02c 100644
--- a/python/mxnet/gluon/trainer.py
+++ b/python/mxnet/gluon/trainer.py
@@ -16,7 +16,7 @@
 # under the License.
 
 # coding: utf-8
-# pylint: disable=
+# pylint: disable=line-too-long
 """Parameter optimizer."""
 __all__ = ['Trainer']
 
@@ -34,7 +34,7 @@ class Trainer(object):
 The set of parameters to optimize.
 optimizer : str or Optimizer
 The optimizer to use. See
-`help 
`_
+`help 
`_
 on Optimizer for a list of available optimizers.
 optimizer_params : dict
 Key-word arguments to be passed to optimizer constructor. For example,


 




[incubator-mxnet] branch master updated: Fixed 4 broken links (#9698)

2018-02-06 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git


The following commit(s) were added to refs/heads/master by this push:
 new d056bfd  Fixed 4 broken links (#9698)
d056bfd is described below

commit d056bfd2d4c3947f7a04a91be205f761cea0f362
Author: thinksanky <31976455+thinksa...@users.noreply.github.com>
AuthorDate: Tue Feb 6 09:17:06 2018 -0800

Fixed 4 broken links (#9698)

* Fixed 4 broken links

* fixed pylint for long line disable and 1 broken link
---
 docs/faq/finetune.md  | 2 +-
 docs/faq/multi_devices.md | 2 +-
 docs/tutorials/index.md   | 4 ++--
 python/mxnet/gluon/trainer.py | 4 ++--
 4 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/docs/faq/finetune.md b/docs/faq/finetune.md
index 2c6c7e3..533c3ca 100644
--- a/docs/faq/finetune.md
+++ b/docs/faq/finetune.md
@@ -15,7 +15,7 @@ with these pretrained weights when training on our new task. 
This process is
 commonly called _fine-tuning_. There are a number of variations of fine-tuning.
 Sometimes, the initial neural network is used only as a _feature extractor_.
 That means that we freeze every layer prior to the output layer and simply 
learn
-a new output layer. In [another 
document](https://github.com/dmlc/mxnet-notebooks/blob/master/python/faq/predict.ipynb),
 we explained how to
+a new output layer. In [another 
document](https://github.com/dmlc/mxnet-notebooks/blob/master/python/how_to/predict.ipynb),
 we explained how to
 do this kind of feature extraction. Another approach is to update all of
 the network's weights for the new task, and that's the approach we demonstrate 
in
 this document.
diff --git a/docs/faq/multi_devices.md b/docs/faq/multi_devices.md
index 5d538bc..b9cb3ea 100644
--- a/docs/faq/multi_devices.md
+++ b/docs/faq/multi_devices.md
@@ -210,4 +210,4 @@ export PS_VERBOSE=1; python ../../tools/launch.py ...
 ### More
 
 - See more launch options by `python ../../tools/launch.py -h`
-- See more options of 
[ps-lite](http://ps-lite.readthedocs.org/en/latest/faq.html)
+- See more options of [ps-lite](https://ps-lite.readthedocs.io/en/latest)
diff --git a/docs/tutorials/index.md b/docs/tutorials/index.md
index aca091c..3eff299 100644
--- a/docs/tutorials/index.md
+++ b/docs/tutorials/index.md
@@ -134,7 +134,7 @@ The Gluon and Module tutorials are in Python, but you can 
also find a variety of
 
 - [Imperative tensor operations on 
CPU/GPU](http://mxnet.incubator.apache.org/tutorials/basic/ndarray.html)
 
-- [NDArray 
Indexing](http://mxnet.incubator.apache.org/tutorials/basic/ndarray_indexing.html)
+- [NDArray Indexing](../tutorials/basic/ndarray_indexing.html)
 
 - [Symbol API](http://mxnet.incubator.apache.org/tutorials/basic/symbol.html)
 
@@ -174,7 +174,7 @@ The Gluon and Module tutorials are in Python, but you can 
also find a variety of
 
 
 
-- [Connectionist Temporal 
Classification](http://mxnet.incubator.apache.org/tutorials/speech_recognition/ctc.html)
+- [Connectionist Temporal 
Classification](../tutorials/speech_recognition/ctc.html)
 
 - [Distributed key-value 
store](http://mxnet.incubator.apache.org/tutorials/python/kvstore.html)
 
diff --git a/python/mxnet/gluon/trainer.py b/python/mxnet/gluon/trainer.py
index 71c144f..c8822bb 100644
--- a/python/mxnet/gluon/trainer.py
+++ b/python/mxnet/gluon/trainer.py
@@ -16,7 +16,7 @@
 # under the License.
 
 # coding: utf-8
-# pylint: disable=
+# pylint: disable=line-too-long
 """Parameter optimizer."""
 __all__ = ['Trainer']
 
@@ -34,7 +34,7 @@ class Trainer(object):
 The set of parameters to optimize.
 optimizer : str or Optimizer
 The optimizer to use. See
-`help 
`_
+`help 
`_
 on Optimizer for a list of available optimizers.
 optimizer_params : dict
 Key-word arguments to be passed to optimizer constructor. For example,



[GitHub] larroy commented on issue #8668: Precision error setting NDArray from np.float32 scalar

2018-02-06 Thread GitBox
larroy commented on issue #8668: Precision error setting NDArray from 
np.float32 scalar
URL: 
https://github.com/apache/incubator-mxnet/issues/8668#issuecomment-363495712
 
 
   Please reopen




[GitHub] larroy commented on issue #8668: Precision error setting NDArray from np.float32 scalar

2018-02-06 Thread GitBox
larroy commented on issue #8668: Precision error setting NDArray from 
np.float32 scalar
URL: 
https://github.com/apache/incubator-mxnet/issues/8668#issuecomment-363495288
 
 
   This is failing on a Raspberry Pi on v1.0.0.
   
   ```
   In [1]: import mxnet as mx
   
   In [2]: a = np.array([47.844944], dtype=np.float32)
   ---
   NameError Traceback (most recent call last)
in ()
   > 1 a = np.array([47.844944], dtype=np.float32)
   
   NameError: name 'np' is not defined
   
   In [3]: import numpy as np
   
   In [4]: a = np.array([47.844944], dtype=np.float32)
   
   In [5]: a
   Out[5]: array([ 47.844944], dtype=float32)
   
   In [6]: b = mx.nd.zeros(1, dtype=np.float32)
   
   In [7]: b[0] = a
   
   In [8]: b
   Out[8]:
   
   [ 47.844944]
   
   
   In [9]: a
   Out[9]: array([ 47.844944], dtype=float32)
   
   In [10]: same(a, b.asnumpy())
   ---
   NameError Traceback (most recent call last)
in ()
   > 1 same(a, b.asnumpy())
   
   NameError: name 'same' is not defined
   
   In [11]: from mxnet.test_utils import same
   
   In [12]: same(a, b.asnumpy())
   Out[12]: True
   
   In [13]: b
   Out[13]:
   
   [ 47.844944]
   
   
   In [14]: b[0] = a[0]
   ---
   AttributeErrorTraceback (most recent call last)
in ()
   > 1 b[0] = a[0]
   
   ~/mxnet/python/mxnet/ndarray/ndarray.py in __setitem__(self, key, value)
   435 indexing_dispatch_code = _get_indexing_dispatch_code(key)
   436 if indexing_dispatch_code == _NDARRAY_BASIC_INDEXING:
   --> 437 self._set_nd_basic_indexing(key, value)
   438 elif indexing_dispatch_code == _NDARRAY_ADVANCED_INDEXING:
   439 self._set_nd_advanced_indexing(key, value)
   
   ~/mxnet/python/mxnet/ndarray/ndarray.py in _set_nd_basic_indexing(self, key, 
value)
   677 if isinstance(key, integer_types):
   678 sliced_arr = self._at(key)
   --> 679 sliced_arr[:] = value
   680 return
   681 elif isinstance(key, py_slice):
   
   ~/mxnet/python/mxnet/ndarray/ndarray.py in __setitem__(self, key, value)
   435 indexing_dispatch_code = _get_indexing_dispatch_code(key)
   436 if indexing_dispatch_code == _NDARRAY_BASIC_INDEXING:
   --> 437 self._set_nd_basic_indexing(key, value)
   438 elif indexing_dispatch_code == _NDARRAY_ADVANCED_INDEXING:
   439 self._set_nd_advanced_indexing(key, value)
   
   ~/mxnet/python/mxnet/ndarray/ndarray.py in _set_nd_basic_indexing(self, key, 
value)
   691 value.copyto(self)
   692 elif isinstance(value, numeric_types):
   --> 693 _internal._full(shape=shape, ctx=self.context,
   694 dtype=self.dtype, 
value=float(value), out=self)
   695 elif isinstance(value, (np.ndarray, np.generic)):
   
   AttributeError: module 'mxnet.ndarray._internal' has no attribute '_full'
   ```
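   
   Condensed into a standalone script, the working part of the session above is (assuming only that numpy and mxnet import cleanly on the board):
   
   ```
   import numpy as np
   import mxnet as mx
   from mxnet.test_utils import same
   
   a = np.array([47.844944], dtype=np.float32)
   b = mx.nd.zeros(1, dtype=np.float32)
   
   # assigning the 1-element array works in this session and keeps the value
   b[0] = a
   assert same(a, b.asnumpy())
   
   # b[0] = a[0] is the path that fails above: the np.float32 scalar gets
   # routed through _internal._full, which this build does not expose
   ```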






[GitHub] nazikus opened a new issue #9715: IndexError in labels when size of training dataset is not multiple of batch size

2018-02-06 Thread GitBox
nazikus opened a new issue #9715: IndexError in labels when size of training 
dataset is not multiple of batch size
URL: https://github.com/apache/incubator-mxnet/issues/9715
 
 
   ## Description
   
   An IndexError is thrown during training if the training dataset size is not a multiple of the batch size.
   
   ## Long description
   
   I have a simple transfer-learning script based on the [mxnet tutorial](https://mxnet.incubator.apache.org/how_to/finetune.html). However, I hit a nasty exception during training that occurs every time on the last batch of the first epoch. Empirically, I have figured out that if you manually ensure the dataset size (number of records in the .lst file) is a multiple of the batch size, the exception goes away.
   
   **Is this expected behavior, or is proper validation missing?**
   
   
   ## Environment info (Required)
   
   python 2.7.12
   mxnet-cu91mkl 1.0.0.post4
   CUDA 9.1.85
   
   
   ## Error Message:
   
   IndexError example during training:
   
   2018-02-06 16:24:39,605 - Epoch[1] Batch [200]   Speed: 37.22 samples/sec   cross-entropy=3.293303  accuracy=0.192188
   2018-02-06 16:25:13,382 - Epoch[1] Batch [220]   Speed: 37.90 samples/sec   cross-entropy=3.286735  accuracy=0.190625
   Traceback (most recent call last):
 File "train_multilabel.py", line 204, in 
   train_model(args)
 File "train_multilabel.py", line 135, in train_model
   num_epoch  = params.num_epoch,
 File 
"/home/otkach/.virtualenvs/mxnet-cu91mkl/local/lib/python2.7/site-packages/mxnet/module/base_module.py",
 line 496, in fit
   self.update_metric(eval_metric, data_batch.label)
 File 
"/home/otkach/.virtualenvs/mxnet-cu91mkl/local/lib/python2.7/site-packages/mxnet/module/module.py",
 line 749, in update_metric
   self._exec_group.update_metric(eval_metric, labels)
 File 
"/home/otkach/.virtualenvs/mxnet-cu91mkl/local/lib/python2.7/site-packages/mxnet/module/executor_group.py",
 line 616, in update_metric
   eval_metric.update_dict(labels_, preds)
 File 
"/home/otkach/.virtualenvs/mxnet-cu91mkl/local/lib/python2.7/site-packages/mxnet/metric.py",
 line 281, in update_dict
   metric.update_dict(labels, preds)
 File 
"/home/otkach/.virtualenvs/mxnet-cu91mkl/local/lib/python2.7/site-packages/mxnet/metric.py",
 line 109, in update_dict
   self.update(label, pred)
 File 
"/home/otkach/.virtualenvs/mxnet-cu91mkl/local/lib/python2.7/site-packages/mxnet/metric.py",
 line 924, in update
   prob = pred[numpy.arange(label.shape[0]), numpy.int64(label)]
   IndexError: index -9223372036854775808 is out of bounds for axis 1 with 
size 52
   
   
   
   ## Steps to reproduce
   
   Simply create an `ImageIter` with a dataset whose size (number of lines in the .lst file) is not divisible by the batch size, then iterate over it and print the label values:
   
    import mxnet as mx

    kv = mx.kvstore.create('device')

    train_data = mx.image.ImageIter(
        batch_size   = 64,
        data_shape   = (3, 224, 224),
        path_imglist = 'dataset/train_data.lst',
        path_root    = 'dataset/images/',
        part_index   = kv.rank,
        num_parts    = kv.num_workers,
        shuffle      = True,
        data_name    = 'data',
        label_name   = 'softmax_label',
    )

    for i, batch in enumerate(train_data):
        print("batch idx {:3d}\n{}\n".format(i, batch.label[0].asnumpy().tolist()))
   
   For example: dataset size 15226, batch size 64, which gives 238 batches (rounded up).
   
   In my case, some remaining label values in the last batch are non-integers:
   
   ...
   batch index   236
   [24.0, 13.0, 29.0, 21.0, 48.0, 44.0, 22.0, 47.0, 36.0, 9.0, 47.0, 43.0, 
33.0, 3.0, 25.0, 34.0, 47.0, 1.0, 34.0, 40.0, 11.0, 10.0, 43.0, 3.0, 43.0, 
27.0, 39.0, 39.0, 13.0, 48.0, 28.0, 42.0, 24.0, 39.0, 31.0, 45.0, 51.0, 6.0, 
1.0, 48.0, 17.0, 42.0, 23.0, 9.0, 27.0, 39.0, 19.0, 36.0, 14.0, 10.0, 26.0, 
37.0, 42.0, 7.0, 47.0, 29.0, 37.0, 6.0, 9.0, 9.0, 39.0, 5.0, 11.0, 22.0]
   batch index   237
   [5.0, 50.0, 11.0, 12.0, 50.0, 10.0, 35.0, 31.0, 11.0, 13.0, 2.0, 26.0, 
51.0, 6.0, 48.0, 37.0, 25.0, 24.0, 14.0, 20.0, 44.0, 40.0, 21.0, 45.0, 23.0, 
18.0, 10.0, 15.0, 21.0, 7.0, 33.0, 32.0, 50.0, 44.0, 10.0, 22.0, 7.0, 9.0, 3.0, 
7.0, 49.0, 47.0, 49.0, 26.0, 0.0, 23.0, 2.0, 1.0, 27.0, 0.0, 13.0, 18.0, 38.0, 
27.0, 50.0, 18.0, 46.0, 5.0, 7.315163621112927e-37, 0.0, 7.315152859140721e-37, 
0.0, 7.315142097168515e-37, 0.0]
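   
   Presumably those junk values come from the iterator padding the final batch. A possible guard (assuming the batch object exposes a `pad` count, as `mx.io.DataBatch` generally does) would be:
   
   ```
   # Hypothetical guard: drop the padded tail of the last batch before
   # using its labels (reuses train_data from the snippet above).
   for i, batch in enumerate(train_data):
       labels = batch.label[0].asnumpy()
       pad = getattr(batch, 'pad', 0) or 0
       if pad:
           labels = labels[:-pad]   # keep only the real samples
       print("batch idx {:3d}: {} real labels".format(i, len(labels)))
   ```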
   
   
   ## What have you tried to solve it?
   Manually remove several lines from the `train_data.lst` file, so that the total training dataset size (e.g. 15168) is divisible by the batch size (64).
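   
   A small script along these lines (a hypothetical helper, plain Python) can automate that trimming:
   
   ```
   def trim_lst_to_batch_multiple(src_path, dst_path, batch_size=64):
       """Copy src_path to dst_path, dropping trailing records so that the
       record count becomes an exact multiple of batch_size."""
       with open(src_path) as src:
           lines = src.readlines()
       usable = len(lines) - (len(lines) % batch_size)
       with open(dst_path, 'w') as dst:
           dst.writelines(lines[:usable])
       return usable
   
   # e.g. 15226 records with batch_size=64 -> keeps 15168 (= 237 * 64)
   print(trim_lst_to_batch_multiple('dataset/train_data.lst',
                                    'dataset/train_data_trimmed.lst'))
   ```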
   



[GitHub] wangshuailong commented on issue #9712: two consecutive backward cause error Check failed: type_ != nullptr The any container is empty requested

2018-02-06 Thread GitBox
wangshuailong commented on issue #9712: two consecutive backward cause error 
Check failed: type_ != nullptr The any container is empty requested
URL: 
https://github.com/apache/incubator-mxnet/issues/9712#issuecomment-363447490
 
 
   I solved this by writing it as two separate parts like this:
   ```
   with autograd.record():
   loss = XXX
   loss.backward()
   
   with autograd.record():
   loss = XXX
   loss.backward()
   ```
   Closing this issue for now. :)
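   
   For anyone landing here later, a minimal runnable sketch of that pattern (the tiny Dense net, L2 loss, and trainer below are hypothetical stand-ins for the `XXX` placeholders above):
   
   ```
   from mxnet import autograd, gluon, nd
   
   # hypothetical stand-ins for the real model and loss
   net = gluon.nn.Dense(1)
   net.initialize()
   trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})
   loss_fn = gluon.loss.L2Loss()
   
   x = nd.random.uniform(shape=(4, 2))
   y = nd.random.uniform(shape=(4, 1))
   
   # first pass: record, backward, step
   with autograd.record():
       loss = loss_fn(net(x), y)
   loss.backward()
   trainer.step(x.shape[0])
   
   # second pass gets its own record scope and a fresh graph, which avoids
   # calling backward twice on the same recorded graph (the error above)
   with autograd.record():
       loss = loss_fn(net(x), y)
   loss.backward()
   trainer.step(x.shape[0])
   ```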



