[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #17483: Tests failed when I try to build scala-package from source
ChaiBapchya commented on issue #17483: Tests failed when I try to build scala-package from source URL: https://github.com/apache/incubator-mxnet/issues/17483#issuecomment-592375627 @zachgk @lanking520 gentle reminder This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #15925: [CI] illegal memory access
ChaiBapchya commented on issue #15925: [CI] illegal memory access URL: https://github.com/apache/incubator-mxnet/issues/15925#issuecomment-592369423 G4 instance with cuda10.0 that is?
[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new 18d526d Bump the publish timestamp. 18d526d is described below commit 18d526d7ee81e016fd0e50dea347cebbd79d66a2 Author: mxnet-ci AuthorDate: Fri Feb 28 06:40:30 2020 + Bump the publish timestamp. --- date.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/date.txt b/date.txt new file mode 100644 index 000..40d456b --- /dev/null +++ b/date.txt @@ -0,0 +1 @@ +Fri Feb 28 06:40:30 UTC 2020
[GitHub] [incubator-mxnet] sxjscience commented on issue #17665: No speedup from using FP16 (4 times slower than PyTorch)
sxjscience commented on issue #17665: No speedup from using FP16 (4 times slower than PyTorch) URL: https://github.com/apache/incubator-mxnet/issues/17665#issuecomment-592326750 @ptrendx Do you have any idea?
[GitHub] [incubator-mxnet] Tommliu opened a new pull request #17717: [Numpy] FFI Bincount
Tommliu opened a new pull request #17717: [Numpy] FFI Bincount URL: https://github.com/apache/incubator-mxnet/pull/17717 ## Description ## Change Operator Registration of Bincount to FFI ## Checklist ## ### Essentials ### Please feel free to remove inapplicable items for your PR. - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes) - [ ] Changes are complete (i.e. I finished coding on this PR) - [ ] All changes have test coverage: - Unit tests are added for small changes to verify correctness (e.g. adding a new operator) - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore) - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL) - [ ] Code is well-documented: - For user-facing API changes, API doc string has been updated. - For new C++ functions in header files, their functionalities and arguments are documented. - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable - Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change ### Changes ### - [ ] Feature1, tests, (and when applicable, API doc) - [ ] Feature2, tests, (and when applicable, API doc) ## Comments ## - If this change is a backward incompatible change, why must this change be made. - Interesting edge cases to note here
[GitHub] [incubator-mxnet] leezu commented on issue #17713: test_operator_gpu.test_embedding_with_type 'an illegal memory access was encountered'
leezu commented on issue #17713: test_operator_gpu.test_embedding_with_type 'an illegal memory access was encountered' URL: https://github.com/apache/incubator-mxnet/issues/17713#issuecomment-592308134 May be a bug in CUDA 10.0. Can't reproduce on 10.1. However, https://docs.nvidia.com/cuda/archive/10.1/cuda-toolkit-release-notes/index.html doesn't seem to list a related fix. So maybe it's nevertheless a bug in MXNet.
[GitHub] [incubator-mxnet] zixuanweeei commented on issue #17702: Support projection feature for LSTM on CPU (Only Inference)
zixuanweeei commented on issue #17702: Support projection feature for LSTM on CPU (Only Inference) URL: https://github.com/apache/incubator-mxnet/pull/17702#issuecomment-592304640 CI has passed last time. The latest commit just added some documents for the projection feature. Accordingly, it should have no impact on functionality. Let's wait for CI validation. Please take a review. Thanks. @ciyongch @pengzhao-intel
[GitHub] [incubator-mxnet] zixuanweeei commented on a change in pull request #17702: Support projection feature for LSTM on CPU (Only Inference)
zixuanweeei commented on a change in pull request #17702: Support projection feature for LSTM on CPU (Only Inference) URL: https://github.com/apache/incubator-mxnet/pull/17702#discussion_r385498649 ## File path: src/operator/rnn.cc ## @@ -385,7 +382,9 @@ The definition of GRU here is slightly different from paper but compatible with }) Review comment: Done. Please take a review again. Thanks.
[incubator-mxnet] branch master updated (0e6ab21 -> 1af06d9)
haoj pushed a change to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. from 0e6ab21 Move cpp-package/include/mxnet-cpp/.gitignore to avoid copying it on installation add 1af06d9 refactor sample_n (#17618) No new revisions were added by this update. Summary of changes: python/mxnet/ndarray/numpy_extension/random.py | 40 - python/mxnet/symbol/numpy_extension/random.py | 40 - src/operator/numpy/random/dist_common.h| 49 -- src/operator/numpy/random/np_bernoulli_op.cc | 2 +- tests/python/unittest/test_numpy_op.py | 3 +- 5 files changed, 88 insertions(+), 46 deletions(-)
[GitHub] [incubator-mxnet] haojin2 merged pull request #17618: [Numpy] Rewrite sample_n
haojin2 merged pull request #17618: [Numpy] Rewrite sample_n URL: https://github.com/apache/incubator-mxnet/pull/17618
[GitHub] [incubator-mxnet] connorgoggins commented on issue #17716: [Large Tensor] linalg ops fail w/input dim >= 2**32
connorgoggins commented on issue #17716: [Large Tensor] linalg ops fail w/input dim >= 2**32 URL: https://github.com/apache/incubator-mxnet/issues/17716#issuecomment-592291579 @mxnet-label-bot add [MKL]
[GitHub] [incubator-mxnet] connorgoggins removed a comment on issue #17716: [Large Tensor] linalg ops fail w/input dim >= 2**32
connorgoggins removed a comment on issue #17716: [Large Tensor] linalg ops fail w/input dim >= 2**32 URL: https://github.com/apache/incubator-mxnet/issues/17716#issuecomment-592290187 @mxnet-label-bot remove [MKL]
[GitHub] [incubator-mxnet] connorgoggins removed a comment on issue #17716: [Large Tensor] linalg ops fail w/input dim >= 2**32
connorgoggins removed a comment on issue #17716: [Large Tensor] linalg ops fail w/input dim >= 2**32 URL: https://github.com/apache/incubator-mxnet/issues/17716#issuecomment-592290007 @mxnet-label-bot add [MKL]
[GitHub] [incubator-mxnet] connorgoggins commented on issue #17716: [Large Tensor] linalg ops fail w/input dim >= 2**32
connorgoggins commented on issue #17716: [Large Tensor] linalg ops fail w/input dim >= 2**32 URL: https://github.com/apache/incubator-mxnet/issues/17716#issuecomment-592290187 @mxnet-label-bot remove [MKL]
[GitHub] [incubator-mxnet] connorgoggins commented on issue #17716: [Large Tensor] linalg ops fail w/input dim >= 2**32
connorgoggins commented on issue #17716: [Large Tensor] linalg ops fail w/input dim >= 2**32 URL: https://github.com/apache/incubator-mxnet/issues/17716#issuecomment-592290007 @mxnet-label-bot add [MKL]
[GitHub] [incubator-mxnet] connorgoggins opened a new issue #17716: [Large Tensor] linalg ops fail w/input dim >= 2**32
connorgoggins opened a new issue #17716: [Large Tensor] linalg ops fail w/input dim >= 2**32 URL: https://github.com/apache/incubator-mxnet/issues/17716 ## Description While testing the `linalg_*` ops on large tensor (dimension >= 2**32) data, I found that all of these ops fail with a segmentation fault on large tensor data. In a test run with the `linalg_det` op, I traced the error to line 485 of `src/operator/tensor/la_op-inl.h`. This line lies within the `Map` void function which takes in several parameters, namely an `int` i, an `int` N, and an `int*` pivot. The error is thrown in the iteration portion of the function, where an `int` j is incremented up to the value of `int` N. This error occurred irrespective of which BLAS engine MXNet was built with (MKL or OpenBLAS). ## Environment ``` --Python Info-- Version : 3.6.6 Compiler : GCC 7.2.0 Build: ('default', 'Jun 28 2018 17:14:51') Arch : ('64bit', '') Pip Info--- Version : 19.3.1 Directory: /home/ubuntu/anaconda3/lib/python3.6/site-packages/pip --MXNet Info--- Version : 1.6.0 Directory: /home/ubuntu/forked-mxnet/python/mxnet Num GPUs : 0 Hashtag not found. Not installed from pre-built package. 
--System Info-- Platform : Linux-4.4.0-1102-aws-x86_64-with-debian-stretch-sid system : Linux node : ip-172-31-41-238 release : 4.4.0-1102-aws version : #113-Ubuntu SMP Wed Jan 29 14:54:54 UTC 2020 --Hardware Info-- machine : x86_64 processor: x86_64 Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 96 On-line CPU(s) list: 0-95 Thread(s) per core: 2 Core(s) per socket: 24 Socket(s): 2 NUMA node(s): 2 Vendor ID: GenuineIntel CPU family: 6 Model: 85 Model name: Intel(R) Xeon(R) Platinum 8259CL CPU @ 2.50GHz Stepping: 7 CPU MHz: 2499.998 BogoMIPS: 4999.99 Hypervisor vendor: KVM Virtualization type: full L1d cache: 32K L1i cache: 32K L2 cache: 1024K L3 cache: 36608K NUMA node0 CPU(s): 0-23,48-71 NUMA node1 CPU(s): 24-47,72-95 Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single kaiser fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid mpx avx512f rdseed adx smap clflushopt clwb avx512cd xsaveopt xsavec xgetbv1 ida arat pku ``` ### MXNet build flags BLAS = MKL ✖ CUDA, ✖ CUDNN, ✖ NCCL, ✖ CUDA_RTC, ✖ TENSORRT, ✔ CPU_SSE, ✔ CPU_SSE2, ✔ CPU_SSE3, ✔ CPU_SSE4_1, ✔ CPU_SSE4_2, ✖ CPU_SSE4A, ✔ CPU_AVX, ✖ CPU_AVX2, ✔ OPENMP, ✖ SSE, ✔ F16C, ✖ JEMALLOC, ✖ BLAS_OPEN, ✖ BLAS_ATLAS, ✔ BLAS_MKL, ✖ BLAS_APPLE, ✔ LAPACK, ✖ MKLDNN, ✖ OPENCV, ✖ CAFFE, ✖ PROFILER, ✖ DIST_KVSTORE, ✖ CXX14, ✔ INT64_TENSOR_SIZE, ✖ SIGNAL_HANDLER, ✔ DEBUG, ✖ TVM_OP BLAS = OpenBLAS ✖ CUDA, ✖ CUDNN, ✖ NCCL, ✖ CUDA_RTC, ✖ TENSORRT, ✔ CPU_SSE, ✔ CPU_SSE2, ✔ CPU_SSE3, ✔ CPU_SSE4_1, ✔ CPU_SSE4_2, ✖ CPU_SSE4A, ✔ CPU_AVX, ✖ CPU_AVX2, ✔ OPENMP, ✖ SSE, ✔ F16C, ✖ JEMALLOC, ✔ BLAS_OPEN, ✖ BLAS_ATLAS, ✖ BLAS_MKL, ✖ BLAS_APPLE, ✔ LAPACK, ✖ MKLDNN, ✖ OPENCV, ✖ CAFFE, ✖ PROFILER, ✖ DIST_KVSTORE, ✖ CXX14, ✔ INT64_TENSOR_SIZE, ✖ SIGNAL_HANDLER, ✔ DEBUG, ✖ TVM_OP ## Steps to reproduce ### Script Create a Python script with the following content: ``` from mxnet import nd print(nd.linalg_det(A=nd.random_normal(shape=(2**16, 2**16)))) ``` and run it with Python3. ### Error With both BLAS engines, the error is the same: ``` Segmentation fault (core dumped) ``` ## Additional Information The `linalg` ops do not throw errors on data with dimension < 2**32. See the following example script and output: ### Script ``` from mxnet import nd print(nd.linalg_det(A=nd.random_normal(shape=(2**15, 2**15)))) ``` ### Output ``` [inf] ```
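The failure mode the issue describes (an `int` loop index in `la_op-inl.h` walking a matrix with 2**32 or more elements) is classic 32-bit index overflow. A minimal sketch of the arithmetic, using NumPy scalar types to stand in for C++ `int`/`int64_t` (illustrative values only, not MXNet code):

```python
import numpy as np

# A 2**16 x 2**16 matrix holds exactly 2**32 elements. The flat offset of
# the last element is 2**32 - 1, which fits in a 64-bit integer but not in
# a signed 32-bit `int`: cast down, it wraps around to -1.
N = 2 ** 16
last_offset = np.int64(N) * N - 1        # 4294967295, correct in 64 bits
as_int32 = last_offset.astype(np.int32)  # what a C++ `int` index would hold
print(last_offset, as_int32)             # 4294967295 -1
```

A negative (or wrapped) offset used to index `pivot` would explain the segmentation fault; widening the loop variables to a 64-bit index type is the usual fix when `INT64_TENSOR_SIZE` is enabled.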
[GitHub] [incubator-mxnet] TaoLv edited a comment on issue #17715: MXNet nightly cpu build (w/o MKLDNN) does not work with Horovod
TaoLv edited a comment on issue #17715: MXNet nightly cpu build (w/o MKLDNN) does not work with Horovod URL: https://github.com/apache/incubator-mxnet/issues/17715#issuecomment-592278212 Hmm if you're using the latest nightly build, I'm afraid MKL-DNN is already enabled in it. Could you please share the output of mx.runtime.Features()?
[GitHub] [incubator-mxnet] xinyu-intel commented on issue #17679: [MKL-DNN] BatchNormRelu Fusion
xinyu-intel commented on issue #17679: [MKL-DNN] BatchNormRelu Fusion URL: https://github.com/apache/incubator-mxnet/pull/17679#issuecomment-592278870 @zhreshold @hetong007 Can you help take a review?
[GitHub] [incubator-mxnet] TaoLv commented on issue #17715: MXNet nightly cpu build (w/o MKLDNN) does not work with Horovod
TaoLv commented on issue #17715: MXNet nightly cpu build (w/o MKLDNN) does not work with Horovod URL: https://github.com/apache/incubator-mxnet/issues/17715#issuecomment-592278212 Hmm if you're using the latest nightly build, I'm afraid MKL-DNN is already enabled in it. Could you please share the output of mx.runtime.Features()?
[GitHub] [incubator-mxnet] apeforest opened a new issue #17715: MXNet nightly cpu build (w/o MKLDNN) does not work with Horovod
apeforest opened a new issue #17715: MXNet nightly cpu build (w/o MKLDNN) does not work with Horovod URL: https://github.com/apache/incubator-mxnet/issues/17715 ## Description Installing Horovod with MXNet nightly CPU (w/o MKLDNN) failed. GPU build and CPU mkldnn build are both okay. ## Error Message: OSError: /usr/local/lib/python3.6/dist-packages/horovod/mxnet/mpi_lib.cpython-36m-x86_64-linux-gnu.so: undefined symbol: _ZN5mxnet10CopyFromToERKNS_7NDArrayEPS1_i @TaoLv Is it also related to the change you made recently that fixed MKLDNN build?
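The undefined symbol in the error above is an Itanium-ABI mangled C++ name, so it can be decoded without MXNet installed (e.g. with `c++filt`). As a rough, hypothetical sketch of the same idea, the length-prefixed identifier segments can be pulled out directly; this is not a full demangler and `mangled_identifiers` is an illustrative name:

```python
import re

def mangled_identifiers(symbol):
    """Extract length-prefixed name segments from an Itanium-mangled symbol.

    Crude sketch: it only recovers nested identifiers (namespace, function,
    class names), and substitution tokens like S1_ leave some noise behind.
    """
    names, i = [], 0
    while i < len(symbol):
        m = re.match(r"\d+", symbol[i:])
        if m:
            n = int(m.group())        # length prefix, e.g. "5" in "5mxnet"
            start = i + m.end()
            names.append(symbol[start:start + n])
            i = start + n
        else:
            i += 1
    return names

sym = "_ZN5mxnet10CopyFromToERKNS_7NDArrayEPS1_i"
print(mangled_identifiers(sym))
```

Here it recovers `mxnet`, `CopyFromTo`, and `NDArray`, i.e. the loader cannot find `mxnet::CopyFromTo(const NDArray&, NDArray*, int)` in the nightly wheel's shared library; that points at a symbol-visibility or ABI change in the MXNet build rather than at Horovod itself.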
[GitHub] [incubator-mxnet] zixuanweeei commented on a change in pull request #17702: Support projection feature for LSTM on CPU (Only Inference)
zixuanweeei commented on a change in pull request #17702: Support projection feature for LSTM on CPU (Only Inference) URL: https://github.com/apache/incubator-mxnet/pull/17702#discussion_r385467427 ## File path: src/operator/rnn.cc ## @@ -385,7 +382,9 @@ The definition of GRU here is slightly different from paper but compatible with }) Review comment: Sure. Thanks for pointing out that.
[incubator-mxnet] branch leezu-patch-1 updated (cd562c4 -> 3f1f22c)
lausen pushed a change to branch leezu-patch-1 in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. discard cd562c4 Fix lapack detection on Ubuntu 18.04 in Makefile new 3f1f22c Fix lapack detection on Ubuntu 18.04 in Makefile This update added new revisions after undoing existing revisions. That is to say, some revisions that were in the old version of the branch are not in the new version. This situation occurs when a user --force pushes a change and generates a repository containing something like this: * -- * -- B -- O -- O -- O (cd562c4) \ N -- N -- N refs/heads/leezu-patch-1 (3f1f22c) You should already have received notification emails for all of the O revisions, and so the following emails describe only the N revisions from the common base, B. Any revisions marked "omit" are not gone; other references still refer to them. Any revisions marked "discard" are gone forever. The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: Makefile | 2 ++ 1 file changed, 2 insertions(+)
[incubator-mxnet] 01/01: Fix lapack detection on Ubuntu 18.04 in Makefile
lausen pushed a commit to branch leezu-patch-1 in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git commit 3f1f22cea36d49ec58a7f6d02fdc70b544713222 Author: Leonard Lausen AuthorDate: Thu Feb 27 17:07:29 2020 -0800 Fix lapack detection on Ubuntu 18.04 in Makefile --- Makefile | 4 1 file changed, 4 insertions(+) diff --git a/Makefile b/Makefile index 8c478d6..90303ae 100644 --- a/Makefile +++ b/Makefile @@ -223,6 +223,8 @@ ifeq (,$(wildcard /lib/liblapack.a)) ifeq (,$(wildcard /lib/liblapack.so)) ifeq (,$(wildcard /usr/lib/liblapack.a)) ifeq (,$(wildcard /usr/lib/liblapack.so)) +ifeq (,$(wildcard /usr/lib/x86_64-linux-gnu/liblapack.a)) +ifeq (,$(wildcard /usr/lib/x86_64-linux-gnu/liblapack.so)) ifeq (,$(wildcard /usr/lib/liblapack.dylib)) ifeq (,$(wildcard /usr/lib64/liblapack.a)) ifeq (,$(wildcard /usr/lib64/liblapack.so)) @@ -240,6 +242,8 @@ endif endif endif endif +endif +endif # lapack settings. ifeq ($(USE_LAPACK), 1)
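The commit above extends the Makefile's chain of `ifeq (,$(wildcard ...))` probes with Ubuntu's multiarch directory, where 18.04 installs LAPACK. A hypothetical Python re-implementation of the same probe logic, with directory and library names taken from the diff (`find_lapack` is an illustrative name, not MXNet code):

```python
from pathlib import Path

# Directories the Makefile probes, including the newly added
# /usr/lib/x86_64-linux-gnu multiarch path used by Ubuntu 18.04.
CANDIDATE_DIRS = ["/lib", "/usr/lib", "/usr/lib/x86_64-linux-gnu", "/usr/lib64"]

def find_lapack(dirs=CANDIDATE_DIRS):
    """Return the first liblapack found, mirroring the $(wildcard) chain."""
    for d in dirs:
        for name in ("liblapack.a", "liblapack.so", "liblapack.dylib"):
            candidate = Path(d) / name
            if candidate.exists():
                return str(candidate)
    return None  # nothing found; the Makefile adjusts its LAPACK settings

print(find_lapack())
```

Before the patch, the equivalent search never looked in `/usr/lib/x86_64-linux-gnu`, so LAPACK went undetected on Ubuntu 18.04 even when `liblapack-dev` was installed.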
[GitHub] [incubator-mxnet] leezu opened a new pull request #17714: Fix lapack detection on Ubuntu 18.04 in Makefile
leezu opened a new pull request #17714: Fix lapack detection on Ubuntu 18.04 in Makefile URL: https://github.com/apache/incubator-mxnet/pull/17714
[incubator-mxnet] branch leezu-patch-1 created (now cd562c4)
lausen pushed a change to branch leezu-patch-1 in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. at cd562c4 Fix lapack detection on Ubuntu 18.04 in Makefile This branch includes the following new commits: new cd562c4 Fix lapack detection on Ubuntu 18.04 in Makefile The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference.
[incubator-mxnet] 01/01: Fix lapack detection on Ubuntu 18.04 in Makefile
lausen pushed a commit to branch leezu-patch-1 in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git commit cd562c44a7332838f80a3a05e44768f29c4e2348 Author: Leonard Lausen AuthorDate: Thu Feb 27 17:07:29 2020 -0800 Fix lapack detection on Ubuntu 18.04 in Makefile --- Makefile | 2 ++ 1 file changed, 2 insertions(+) diff --git a/Makefile b/Makefile index 8c478d6..36ebe70 100644 --- a/Makefile +++ b/Makefile @@ -223,6 +223,8 @@ ifeq (,$(wildcard /lib/liblapack.a)) ifeq (,$(wildcard /lib/liblapack.so)) ifeq (,$(wildcard /usr/lib/liblapack.a)) ifeq (,$(wildcard /usr/lib/liblapack.so)) +ifeq (,$(wildcard /usr/lib/x86_64-linux-gnu/liblapack.a)) +ifeq (,$(wildcard /usr/lib/x86_64-linux-gnu/liblapack.so)) ifeq (,$(wildcard /usr/lib/liblapack.dylib)) ifeq (,$(wildcard /usr/lib64/liblapack.a)) ifeq (,$(wildcard /usr/lib64/liblapack.so))
[incubator-mxnet] 01/01: Fix lapack detection on Ubuntu 18.04 in Makefile
This is an automated email from the ASF dual-hosted git repository. lausen pushed a commit to branch leezu-patch-1 in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git commit 3f1f22cea36d49ec58a7f6d02fdc70b544713222 Author: Leonard Lausen AuthorDate: Thu Feb 27 17:07:29 2020 -0800 Fix lapack detection on Ubuntu 18.04 in Makefile --- Makefile | 4 1 file changed, 4 insertions(+) diff --git a/Makefile b/Makefile index 8c478d6..90303ae 100644 --- a/Makefile +++ b/Makefile @@ -223,6 +223,8 @@ ifeq (,$(wildcard /lib/liblapack.a)) ifeq (,$(wildcard /lib/liblapack.so)) ifeq (,$(wildcard /usr/lib/liblapack.a)) ifeq (,$(wildcard /usr/lib/liblapack.so)) +ifeq (,$(wildcard /usr/lib/x86_64-linux-gnu/liblapack.a)) +ifeq (,$(wildcard /usr/lib/x86_64-linux-gnu/liblapack.so)) ifeq (,$(wildcard /usr/lib/liblapack.dylib)) ifeq (,$(wildcard /usr/lib64/liblapack.a)) ifeq (,$(wildcard /usr/lib64/liblapack.so)) @@ -240,6 +242,8 @@ endif endif endif endif +endif +endif # lapack settings. ifeq ($(USE_LAPACK), 1)
[incubator-mxnet] branch leezu-patch-1 updated (cd562c4 -> 3f1f22c)
This is an automated email from the ASF dual-hosted git repository. lausen pushed a change to branch leezu-patch-1 in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. discard cd562c4 Fix lapack detection on Ubuntu 18.04 in Makefile new 3f1f22c Fix lapack detection on Ubuntu 18.04 in Makefile This update added new revisions after undoing existing revisions. That is to say, some revisions that were in the old version of the branch are not in the new version. This situation occurs when a user --force pushes a change and generates a repository containing something like this: * -- * -- B -- O -- O -- O (cd562c4) \ N -- N -- N refs/heads/leezu-patch-1 (3f1f22c) You should already have received notification emails for all of the O revisions, and so the following emails describe only the N revisions from the common base, B. Any revisions marked "omit" are not gone; other references still refer to them. Any revisions marked "discard" are gone forever. The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference. Summary of changes: Makefile | 2 ++ 1 file changed, 2 insertions(+)
[incubator-mxnet] branch leezu-patch-1 created (now cd562c4)
This is an automated email from the ASF dual-hosted git repository. lausen pushed a change to branch leezu-patch-1 in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git. at cd562c4 Fix lapack detection on Ubuntu 18.04 in Makefile This branch includes the following new commits: new cd562c4 Fix lapack detection on Ubuntu 18.04 in Makefile The 1 revisions listed above as "new" are entirely new to this repository and will be described in separate emails. The revisions listed as "add" were already present in the repository and have only been added to this reference.
[incubator-mxnet] 01/01: Fix lapack detection on Ubuntu 18.04 in Makefile
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a commit to branch leezu-patch-1
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

commit cd562c44a7332838f80a3a05e44768f29c4e2348
Author: Leonard Lausen
AuthorDate: Thu Feb 27 17:07:29 2020 -0800

    Fix lapack detection on Ubuntu 18.04 in Makefile
---
 Makefile | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/Makefile b/Makefile
index 8c478d6..36ebe70 100644
--- a/Makefile
+++ b/Makefile
@@ -223,6 +223,8 @@ ifeq (,$(wildcard /lib/liblapack.a))
 ifeq (,$(wildcard /lib/liblapack.so))
 ifeq (,$(wildcard /usr/lib/liblapack.a))
 ifeq (,$(wildcard /usr/lib/liblapack.so))
+ifeq (,$(wildcard /usr/lib/x86_64-linux-gnu/liblapack.a))
+ifeq (,$(wildcard /usr/lib/x86_64-linux-gnu/liblapack.so))
 ifeq (,$(wildcard /usr/lib/liblapack.dylib))
 ifeq (,$(wildcard /usr/lib64/liblapack.a))
 ifeq (,$(wildcard /usr/lib64/liblapack.so))
[incubator-mxnet] branch master updated (55e6987 -> 0e6ab21)
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.

 from 55e6987  cmake: remove -mf16c flag for android build (#17523)
  add 0e6ab21  Move cpp-package/include/mxnet-cpp/.gitignore to avoid copying it on installation

No new revisions were added by this update.

Summary of changes:
 cpp-package/.gitignore                   | 2 ++
 cpp-package/include/mxnet-cpp/.gitignore | 2 --
 2 files changed, 2 insertions(+), 2 deletions(-)
 create mode 100644 cpp-package/.gitignore
 delete mode 100644 cpp-package/include/mxnet-cpp/.gitignore
[incubator-mxnet] branch master updated: Move cpp-package/include/mxnet-cpp/.gitignore to avoid copying it on installation
This is an automated email from the ASF dual-hosted git repository.

lausen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git

The following commit(s) were added to refs/heads/master by this push:
     new 0e6ab21  Move cpp-package/include/mxnet-cpp/.gitignore to avoid copying it on installation
0e6ab21 is described below

commit 0e6ab21d2bc96e1ae4e158d4958a65fae74fa1ee
Author: Gustavo Alvarez <462213+sl1pk...@users.noreply.github.com>
AuthorDate: Fri Feb 28 01:48:48 2020 +0100

    Move cpp-package/include/mxnet-cpp/.gitignore to avoid copying it on installation
---
 cpp-package/.gitignore                   | 2 ++
 cpp-package/include/mxnet-cpp/.gitignore | 2 --
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/cpp-package/.gitignore b/cpp-package/.gitignore
new file mode 100644
index 000..51453c9
--- /dev/null
+++ b/cpp-package/.gitignore
@@ -0,0 +1,2 @@
+# Rebuildable file(s)
+include/mxnet-cpp/op.h
diff --git a/cpp-package/include/mxnet-cpp/.gitignore b/cpp-package/include/mxnet-cpp/.gitignore
deleted file mode 100644
index 995efdd..000
--- a/cpp-package/include/mxnet-cpp/.gitignore
+++ /dev/null
@@ -1,2 +0,0 @@
-# Rebuildable file(s)
-op.h
[GitHub] [incubator-mxnet] leezu closed issue #17704: cpp-package: .gitignore in wrong path
leezu closed issue #17704: cpp-package: .gitignore in wrong path URL: https://github.com/apache/incubator-mxnet/issues/17704 This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [incubator-mxnet] leezu merged pull request #17709: Fix #17704: move the .gitignore file to right path
leezu merged pull request #17709: Fix #17704: move the .gitignore file to the right path URL: https://github.com/apache/incubator-mxnet/pull/17709
[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new 552825e Bump the publish timestamp. 552825e is described below commit 552825e9243b401975903a2992bd4ffbbdd55db6 Author: mxnet-ci AuthorDate: Fri Feb 28 00:40:34 2020 + Bump the publish timestamp. --- date.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/date.txt b/date.txt new file mode 100644 index 000..3ad2c85 --- /dev/null +++ b/date.txt @@ -0,0 +1 @@ +Fri Feb 28 00:40:34 UTC 2020
[GitHub] [incubator-mxnet] eric-haibin-lin commented on a change in pull request #17702: Support projection feature for LSTM on CPU (Only Inference)
eric-haibin-lin commented on a change in pull request #17702: Support projection feature for LSTM on CPU (Only Inference) URL: https://github.com/apache/incubator-mxnet/pull/17702#discussion_r385442120 ## File path: src/operator/rnn.cc ## @@ -385,7 +382,9 @@ The definition of GRU here is slightly different from paper but compatible with }) Review comment: I don't think the projection support is clear in the documentation. Could you update the documentation with LSTMP support when projection_size is set? You can refer to https://github.com/apache/incubator-mxnet/blob/62a85f365b819829fedb60116f803e0c0a3c554c/python/mxnet/gluon/contrib/rnn/rnn_cell.py#L197 Thanks!
[GitHub] [incubator-mxnet] ptrendx commented on issue #17658: Update website, README and NEWS with 1.6.0
ptrendx commented on issue #17658: Update website, README and NEWS with 1.6.0 URL: https://github.com/apache/incubator-mxnet/pull/17658#issuecomment-592233655 Ok, I believe it should be ready for review. I updated:
- README
- NEWS.md (including fixing multiple broken links there)
- the Getting Started webpage

@szha @aaronmarkham @ThomasDelteil @roywei Please review
[GitHub] [incubator-mxnet] leezu commented on issue #15925: [CI] illegal memory access
leezu commented on issue #15925: [CI] illegal memory access URL: https://github.com/apache/incubator-mxnet/issues/15925#issuecomment-592233431 Could the CI issue be related to https://github.com/apache/incubator-mxnet/issues/17713? That can be reproduced deterministically on a G4 instance.
[GitHub] [incubator-mxnet] leezu opened a new issue #17713: test_operator_gpu.test_embedding_with_type 'an illegal memory access was encountered'
leezu opened a new issue #17713: test_operator_gpu.test_embedding_with_type 'an illegal memory access was encountered' URL: https://github.com/apache/incubator-mxnet/issues/17713

## Description
The Embedding operator in `test_operator_gpu.test_embedding_with_type` triggers an illegal memory access error deterministically on a G4 instance.

### Error Message
```
% nosetests --verbose --stop ../tests/python/gpu/test_operator_gpu.py -m test_embedding_with_type
/home/ubuntu/src/mxnet-master/tests/python/gpu/test_operator_gpu.py:2402: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if req_dict['data'] is 'write':
/home/ubuntu/src/mxnet-master/tests/python/gpu/test_operator_gpu.py:2404: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if req_dict['grid'] is 'write':
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_operator.py:3738: SyntaxWarning: "is" with a literal. Did you mean "=="?
  assert_almost_equal(exe.outputs[0], np_out, rtol=1e-2 if dtype is 'float16' else 1e-5, atol=1e-5)
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_operator.py:3897: SyntaxWarning: "is" with a literal. Did you mean "=="?
  npy_out = l1norm(in_data, i) if order is 1 else l2norm(in_data, i)
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_operator.py:3898: SyntaxWarning: "is" with a literal. Did you mean "=="?
  npy_out_backward = np.sign(in_data) if order is 1 else in_data/npy_out
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_operator.py:3914: SyntaxWarning: "is" with a literal. Did you mean "=="?
  npy_out = l1norm(in_data, (i, i+1)) if order is 1 else l2norm(in_data, (i, i+1))
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_operator.py:3915: SyntaxWarning: "is" with a literal. Did you mean "=="?
  npy_out_backward = np.sign(in_data) if order is 1 else in_data/npy_out
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_numpy_op.py:2293: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if len(func_data) is 4:
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_numpy_op.py:2763: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if axis_size is 0:
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_numpy_op.py:2775: SyntaxWarning: "is" with a literal. Did you mean "=="?
  sections = 7 if shape[axis] is 0 else shape[axis]
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_numpy_op.py:2816: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if axis_size is 0:
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_numpy_op.py:2836: SyntaxWarning: "is" with a literal. Did you mean "=="?
  sections = 7 if x.shape[axis] is 0 else random.randint(1,x.shape[axis])
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_numpy_op.py:2871: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if axis_size is 0:
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_numpy_op.py:2888: SyntaxWarning: "is" with a literal. Did you mean "=="?
  sections = 7 if axis_size is 0 else axis_size
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_numpy_op.py:6194: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if axis is -1:
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_random.py:579: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if len(x.shape) is 1:
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_sparse_ndarray.py:487: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if default_context().device_type is 'gpu':
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_sparse_ndarray.py:1043: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if default_context().device_type is 'gpu':
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_sparse_operator.py:363: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if ((lhs_stype is 'default' and rhs_stype is 'row_sparse') or
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_sparse_operator.py:363: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if ((lhs_stype is 'default' and rhs_stype is 'row_sparse') or
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_sparse_operator.py:364: SyntaxWarning: "is" with a literal. Did you mean "=="?
  (lhs_stype is 'default' and rhs_stype is 'csr') or
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_sparse_operator.py:364: SyntaxWarning: "is" with a literal. Did you mean "=="?
  (lhs_stype is 'default' and rhs_stype is 'csr') or
/home/ubuntu/src/mxnet-master/tests/python/gpu/../unittest/test_sparse_operator.py:365: SyntaxWarning: "is" with a literal. Did you mean "=="?
  (lhs_stype is 'row_sparse' and rhs_stype is 'row_sparse') and
```
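The wall of `SyntaxWarning: "is" with a literal` messages in the log above is Python 3.8 flagging identity comparisons against int and str literals in the test suite. `is` checks object identity, which only coincidentally matches value equality for interned objects; `==` is the correct comparison. A minimal illustration (the `is_write_req` helper is made up for this sketch):

```python
# `is` compares object identity, `==` compares value. CPython interns
# small ints and many literals, so `x is 'write'` can appear to work,
# but strings built at runtime are usually distinct objects.
runtime_str = "".join(["wri", "te"])   # equal in value to "write"
assert runtime_str == "write"          # value comparison: always reliable
# `runtime_str is "write"` would typically be False here.

def is_write_req(req):
    # The fix Python 3.8 suggests: compare with == instead of `is`.
    return req == "write"
```

Replacing each flagged `is` with `==` (as the warning suggests) silences these messages without changing the intended behavior.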
[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #14994: Flaky test: test_lstm_clip
ChaiBapchya commented on issue #14994: Flaky test: test_lstm_clip URL: https://github.com/apache/incubator-mxnet/issues/14994#issuecomment-592214753 For unrelated PR #17487 http://jenkins.mxnet-ci.amazon-ml.com/blue/rest/organizations/jenkins/pipelines/mxnet-validation/pipelines/unix-gpu/branches/PR-17487/runs/21/nodes/431/log/?start=0
[GitHub] [incubator-mxnet] sxjscience commented on issue #17665: No speedup from using FP16 (4 times slower than PyTorch)
sxjscience commented on issue #17665: No speedup from using FP16 (4 times slower than PyTorch) URL: https://github.com/apache/incubator-mxnet/issues/17665#issuecomment-592204581 I tried with `nvprof` and found that MXNet and PyTorch use different kernels. For MXNet, it's `volta_fp16_sgemm_fp16_64x64_nn`:
```
ubuntu@ip-172-31-27-255:~$ sudo /usr/local/cuda/bin/nvprof python3 test_fp16.py
/usr/lib/python3/dist-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
==117922== NVPROF is profiling process 117922, command: python3 test_fp16.py
57.4354133605957
==117922== Profiling application: python3 test_fp16.py
==117922== Profiling result:
            Type  Time(%)      Time  Calls       Avg       Min       Max  Name
 GPU activities:  100.00%  57.3739s    100  573.74ms  572.92ms  594.30ms  volta_fp16_sgemm_fp16_64x64_nn
                    0.00%  1.7993ms      3  599.78us  599.42us  600.26us  _ZN5mxnet2op8mxnet_op20mxnet_generic_kernelINS1_11op_with_reqINS1_10set_to_intILi0EEELi1EEEJPN7mshadow4half6half_tviDpT0_
                    0.00%  190.78us    100  1.9070us  1.7600us  6.8160us  [CUDA memcpy DtoH]
                    0.00%  19.200us     12  1.6000us  1.5360us  1.9520us  [CUDA memcpy HtoD]
                    0.00%  11.424us      8  1.4280us  1.4080us  1.4720us  [CUDA memset]
      API calls:   76.78%  57.3844s    203  282.68ms  6.2690us  594.29ms  cudaStreamSynchronize
```
For PyTorch, it's `volta_fp16_s884gemm_fp16_256x128_ldg8_f2f_nn`.
```
ubuntu@ip-172-31-27-255:~$ vi test_fp16_pytorch.py
ubuntu@ip-172-31-27-255:~$ sudo /usr/local/cuda/bin/nvprof python3 test_fp16_pytorch.py
==118113== NVPROF is profiling process 118113, command: python3 test_fp16_pytorch.py
8.097127437591553
==118113== Profiling application: python3 test_fp16_pytorch.py
==118113== Profiling result:
            Type  Time(%)      Time  Calls       Avg       Min       Max  Name
 GPU activities:   97.29%  8.08549s    100  80.855ms  80.561ms  93.579ms  volta_fp16_s884gemm_fp16_256x128_ldg8_f2f_nn
                    2.71%  224.92ms      4  56.231ms  1.9200us  75.214ms  [CUDA memcpy HtoD]
                    0.00%  186.40us    100  1.8640us  1.6640us  3.9680us  [CUDA memcpy DtoH]
      API calls:   50.26%  8.30841s    103  80.664ms  74.913ms  93.269ms  cudaMemcpyAsync
                   49.40%  8.16635s      6  1.36106s  9.3230us  8.16199s  cudaMalloc
                    0.11%  18.989ms   1528  12.427us     714ns  479.17us  cuDeviceGetAttribute
                    0.11%  17.890ms     16  1.1181ms  1.0814ms  1.1642ms  cudaGetDeviceProperties
```
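The kernel names explain the gap: `s884gemm` is a Volta Tensor Core GEMM, while `sgemm_fp16` is an ordinary FP16 SIMT GEMM, so the MXNet run is not hitting the Tensor Core path at all. Relatedly, a common rule of thumb for cuBLAS on Volta (an assumption worth checking against the cuBLAS docs for a given version, not something this issue establishes) is that FP16 Tensor Core GEMMs want M, N and K to be multiples of 8; a trivial shape check:

```python
def tensor_core_friendly(m, n, k, multiple=8):
    """Heuristic eligibility check for Volta FP16 Tensor Core (s884) GEMM
    kernels: all GEMM dimensions divisible by `multiple` (8 for FP16)."""
    return all(dim % multiple == 0 for dim in (m, n, k))
```

Since both frameworks run the same shapes here, shape alignment alone cannot explain the difference; the check is only useful for ruling out one common cause of Tensor Core fallback.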
[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op
access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r385399026 ## File path: src/operator/rnn_impl.h ## @@ -127,9 +127,9 @@ void LstmForwardTraining(DType* ws, bool state_outputs, const int L, const int D, Review comment: What are D and L? Can D*L be > 5B?
[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op
access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r385399105 ## File path: src/operator/rnn_impl.h ## @@ -146,15 +146,15 @@ void LstmForwardTraining(DType* ws, Tensor hx(hx_ptr, Shape3(total_layers, N, H)); Tensor cx(cx_ptr, Shape3(total_layers, N, H)); const int b_size = 2 * H * 4; Review comment: size_t?
[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op
access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r385398150 ## File path: src/operator/rnn-inl.h ## @@ -361,9 +361,9 @@ void RNNBackward(DType* ws, DType* rs, const int num_layers, const int direction, - const int seq_length, - const int batch_size, - const int input_size, + const index_t seq_length, + const index_t batch_size, + const index_t input_size, Review comment: size_t? if it's not a breaking change
[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op
access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r385397998 ## File path: src/operator/rnn-inl.h ## @@ -320,9 +320,9 @@ void RNNForwardInference(DType* ws, bool state_outputs, const int num_layers, const int direction, - const int seq_length, - const int batch_size, - const int input_size, + const index_t seq_length, + const index_t batch_size, + const index_t input_size, Review comment: size_t? if it's not a breaking change
[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op
access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r385397900 ## File path: src/operator/rnn-inl.h ## @@ -278,9 +278,9 @@ void RNNForwardTraining(DType* ws, bool state_outputs, const int num_layers, const int direction, -const int seq_length, -const int batch_size, -const int input_size, +const index_t seq_length, +const index_t batch_size, +const index_t input_size, Review comment: size_t? if it's not a breaking change
[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op
access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r385397647 ## File path: src/operator/rnn-inl.h ## @@ -213,8 +213,8 @@ inline size_t GetRNNWorkspaceSize(int seq_length, inline size_t GetRNNReserveSpaceSize(int num_layer, int direction, - int seq_length, - int batch_size, + index_t seq_length, + index_t batch_size, Review comment: size_t? if it's not a breaking change
[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op
access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r385397242 ## File path: src/operator/rnn-inl.h ## @@ -123,7 +123,7 @@ struct RNNParam : public dmlc::Parameter { }; inline int GetRnnParamSize(int num_layer, - int input_size, + index_t input_size, Review comment: size_t? Make sure the API signature doesn't change. If that's the case, then keep it index_t
[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op
access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r385396554 ## File path: src/operator/rnn-inl.h ## @@ -213,8 +213,8 @@ inline size_t GetRNNWorkspaceSize(int seq_length, inline size_t GetRNNReserveSpaceSize(int num_layer, int direction, - int seq_length, - int batch_size, + index_t seq_length, + index_t batch_size, Review comment: size_t
[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op
access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r385396355 ## File path: src/operator/rnn-inl.h ## @@ -123,7 +123,7 @@ struct RNNParam : public dmlc::Parameter { }; inline int GetRnnParamSize(int num_layer, - int input_size, + index_t input_size, Review comment: Sorry for changing my opinion again. Since it's an input size, 'size_t' would be better here
[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op
access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r385393949 ## File path: src/operator/rnn-inl.h ## @@ -140,14 +140,14 @@ inline int GetRnnParamSize(int num_layer, size *= 3; break; } - int size1 = (input_size + state_size + 2) * size; // first layer size - int size2 = (state_size * direction + state_size + 2) * size; // other layers size + index_t size1 = (input_size + state_size + 2) * size; // first layer size Review comment: Let's prefer size_t for sizes. Or do you think these values can be negative too?
[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op
access2rohit commented on a change in pull request #17632: [Large Tensor] Fixed RNN op URL: https://github.com/apache/incubator-mxnet/pull/17632#discussion_r385394072 ## File path: src/operator/rnn-inl.h ## @@ -182,8 +182,8 @@ inline int GetRnnBiasSize(int num_layer, * - output -> h[t](, c[t] additionally with Lstm) time by time(sz: NxH(x2)) * - intermediate y[1...T] as next layer's inputs(sz: TxNxHxD) */ -inline size_t GetRNNWorkspaceSize(int seq_length, - int batch_size, +inline size_t GetRNNWorkspaceSize(index_t seq_length, + index_t batch_size, Review comment: @apeforest ?
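The reviewers' push from `int` toward `index_t`/`size_t` in this thread is about overflow: with 32-bit `int` arguments, a product like `seq_length * batch_size * input_size` silently wraps once it exceeds 2^31 - 1, which is exactly the "large tensor" regime this PR targets. A Python sketch of the wraparound (helper names are illustrative, not MXNet code):

```python
def as_c_int32(x):
    """Truncate a Python int to a signed 32-bit two's-complement value,
    imitating arithmetic on a C 'int'."""
    x &= 0xFFFFFFFF
    return x - 0x100000000 if x >= 0x80000000 else x

def workspace_elems_int32(seq_length, batch_size, hidden):
    # Old-style signature (const int ...): the element count silently wraps.
    return as_c_int32(seq_length * batch_size * hidden)

def workspace_elems_wide(seq_length, batch_size, hidden):
    # With 64-bit index_t/size_t the same product is representable;
    # Python's arbitrary-precision ints stand in for the wide type here.
    return seq_length * batch_size * hidden
```

For example, 8192 * 512 * 1024 is exactly 2^32 elements: the 32-bit version wraps to 0 while the wide version returns 4294967296. `size_t` additionally rules out negative values, which is why the reviewer prefers it for quantities that are pure sizes.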
[GitHub] [incubator-mxnet] leezu commented on issue #17712: Gluon Block hooks reference counting bug
leezu commented on issue #17712: Gluon Block hooks reference counting bug URL: https://github.com/apache/incubator-mxnet/issues/17712#issuecomment-592195594 Seems to be an upstream bug, so I opened https://bugs.python.org/issue39778
[GitHub] [incubator-mxnet] leezu closed issue #17712: Gluon Block hooks reference counting bug
leezu closed issue #17712: Gluon Block hooks reference counting bug URL: https://github.com/apache/incubator-mxnet/issues/17712
[GitHub] [incubator-mxnet] leezu opened a new issue #17712: Gluon Block hooks reference counting bug
leezu opened a new issue #17712: Gluon Block hooks reference counting bug URL: https://github.com/apache/incubator-mxnet/issues/17712

## Description

Garbage collection can trigger `_PyObject_AssertFailed` if Gluon Block hooks are used.

### Error Message

```bash
PYTHONTRACEMALLOC=1 ~/.pyenv/versions/3.8.2-debug/bin/python ~/test.py
~/src/mxnet-master/python master + ip-172-31-95-96
Modules/gcmodule.c:110: gc_decref: Assertion "gc_get_refs(g) > 0" failed: refcount is too small
Memory block allocated at (most recent call first):
  File "/home/ubuntu/src/mxnet-master/python/mxnet/gluon/utils.py", line 398
object address  : 0x7f3180489bf0
object refcount : 1
object type     : 0x5568872ac1a0
object type name: weakref
object repr     :
Fatal Python error: _PyObject_AssertFailed
Python runtime state: initialized
Current thread 0x7f31de3ad3c0 (most recent call first):
  File "/home/ubuntu/src/mxnet-master/python/mxnet/gluon/block.py", line 620 in register_forward_hook
  File "/home/ubuntu/src/mxnet-master/python/mxnet/gluon/block.py", line 789 in _register_summary_hook
  File "/home/ubuntu/src/mxnet-master/python/mxnet/gluon/block.py", line 637 in apply
  File "/home/ubuntu/src/mxnet-master/python/mxnet/gluon/block.py", line 636 in apply
  File "/home/ubuntu/src/mxnet-master/python/mxnet/gluon/block.py", line 636 in apply
  File "/home/ubuntu/src/mxnet-master/python/mxnet/gluon/block.py", line 798 in summary
  File "/home/ubuntu/test.py", line 5 in
zsh: abort (core dumped)  PYTHONTRACEMALLOC=1 ~/.pyenv/versions/3.8.2-debug/bin/python ~/test.py
```

## To Reproduce

To deterministically reproduce, apply this patch:

```diff
diff --git a/python/mxnet/gluon/block.py b/python/mxnet/gluon/block.py
index e925b31a2..220cc55ef 100644
--- a/python/mxnet/gluon/block.py
+++ b/python/mxnet/gluon/block.py
@@ -26,7 +26,7 @@ import warnings
 import re
 from collections import OrderedDict, defaultdict
 import numpy as np
-
+import gc
 from ..base import mx_real_t, MXNetError
 from .. import symbol, ndarray, initializer, np_symbol
 from ..symbol import Symbol
@@ -617,6 +617,7 @@ class Block(object):
         """
         handle = HookHandle()
         handle.attach(self._forward_hooks, hook)
+        gc.collect()
         return handle

     def apply(self, fn):
```

The issue will also occur non-deterministically when garbage collection is triggered in the background.

### Steps to reproduce

1. Use a Python debug build. For example, with [pyenv](https://github.com/pyenv/pyenv), run `pyenv install --debug 3.8.2`.
2. Apply the patch to MXNet.
3. Run

```python
import mxnet as mx
net = mx.gluon.model_zoo.vision.resnet50_v1()
net.initialize()
net.summary(mx.nd.ones((32, 3, 224, 224)))
```

## Environment

```
----------Python Info----------
Version      : 3.8.2
Compiler     : GCC 7.4.0
Build        : ('default', 'Feb 27 2020 20:16:52')
Arch         : ('64bit', 'ELF')
------------Pip Info-----------
Version      : 19.2.3
Directory    : /home/ubuntu/.pyenv/versions/3.8.2-debug/lib/python3.8/site-packages/pip
----------MXNet Info-----------
Version      : 1.6.0
Directory    : /home/ubuntu/src/mxnet-master/python/mxnet
Num GPUs     : 0
Hashtag not found. Not installed from pre-built package.
----------System Info----------
Platform     : Linux-4.15.0-1058-aws-x86_64-with-glibc2.27
system       : Linux
node         : ip-172-31-95-96
release      : 4.15.0-1058-aws
version      : #60-Ubuntu SMP Wed Jan 15 22:35:20 UTC 2020
----------Hardware Info----------
machine      : x86_64
processor    : x86_64
Architecture:        x86_64
CPU op-mode(s):      32-bit, 64-bit
Byte Order:          Little Endian
CPU(s):              96
On-line CPU(s) list: 0-95
Thread(s) per core:  2
Core(s) per socket:  24
Socket(s):           2
NUMA node(s):        2
Vendor ID:           GenuineIntel
CPU family:          6
Model:               85
Model name:          Intel(R) Xeon(R) Platinum 8275CL CPU @ 3.00GHz
Stepping:            7
CPU MHz:             1439.236
BogoMIPS:            5999.99
Hypervisor vendor:   KVM
Virtualization type: full
L1d cache:           32K
L1i cache:           32K
L2 cache:            1024K
L3 cache:            36608K
NUMA node0 CPU(s):   0-23,48-71
NUMA node1 CPU(s):   24-47,72-95
Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc_known_freq pni pclmulqdq monitor ssse3 fma cx16 pcid sse4_1 sse4_2
```
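The crash involves a `weakref` allocated in `gluon/utils.py`, where `HookHandle` lives: a hook handle keeps only a weak reference to the block's hook dictionary so that holding the handle does not keep the block alive. The class below is a simplified, illustrative stand-in for that pattern — not the actual `mxnet.gluon.utils.HookHandle` implementation:

```python
import gc
import weakref
from collections import OrderedDict

class Hook:
    """Stand-in for a user-registered forward hook."""
    def __call__(self, block, inputs, outputs):
        pass

class HookHandle:
    """Simplified sketch of a weakref-based hook handle (illustrative,
    not the real mxnet.gluon.utils.HookHandle)."""
    def __init__(self):
        self._hooks_dict_ref = None
        self._id = None

    def attach(self, hooks_dict, hook):
        # Hold the dict only weakly so the handle does not keep the owning
        # Block alive -- this weakref is what shows up in the traceback above.
        self._hooks_dict_ref = weakref.ref(hooks_dict)
        self._id = id(hook)
        hooks_dict[self._id] = hook

    def detach(self):
        hooks_dict = self._hooks_dict_ref() if self._hooks_dict_ref else None
        if hooks_dict is not None and self._id in hooks_dict:
            del hooks_dict[self._id]

hooks = OrderedDict()          # OrderedDict, unlike plain dict, is weakref-able
handle = HookHandle()
handle.attach(hooks, Hook())
gc.collect()                   # in the report, a collection here aborts a debug build
handle.detach()
print(len(hooks))  # -> 0
```

Running the sketch on a regular (non-debug) build completes cleanly; the assertion in the report only fires on a debug interpreter, where refcount bookkeeping is checked.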
[GitHub] [incubator-mxnet] snnn commented on issue #17711: [ONNX export] Fixing spatial export for batchnorm
snnn commented on issue #17711: [ONNX export] Fixing spatial export for batchnorm URL: https://github.com/apache/incubator-mxnet/pull/17711#issuecomment-592158271 LGTM.
[GitHub] [incubator-mxnet] vinitra opened a new pull request #17711: [ONNX export] Fixing spatial export for batchnorm
vinitra opened a new pull request #17711: [ONNX export] Fixing spatial export for batchnorm URL: https://github.com/apache/incubator-mxnet/pull/17711

## Description ##

In the ONNX model zoo, we noticed that models like ArcFace and DUC that have been exported from mxnet with batchnorm operators are not treating spatial mode correctly.

https://github.com/onnx/models/issues/156
https://github.com/onnx/models/issues/91#issuecomment-540312125

Quoting from the [MxNet BatchNorm documentation](https://mxnet.apache.org/api/python/docs/tutorials/packages/gluon/training/normalization/index.html): "One of the most popular normalization techniques is Batch Normalization, usually called BatchNorm for short. We normalize the activations across all samples in a batch for each of the channels independently."

The comment in the exporter refers to mean and variance per feature, instead of per channel. Fixing this means that spatial mode should be 1, instead of 0, in the ONNX export. Changing spatial to 1 fixed these models, in accordance with the issues referenced above.

## Checklist ##

### Essentials ###

Please feel free to remove inapplicable items for your PR.

- [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
- [ ] Changes are complete (i.e. I finished coding on this PR)
- [ ] All changes have test coverage:
  - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
- [ ] Code is well-documented:
  - For user-facing API changes, API doc string has been updated.
  - For new C++ functions in header files, their functionalities and arguments are documented.
  - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on test set and reference to the original paper if applicable
  - Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
- [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

### Changes ###

- [ ] Fixing onnx export for batchnorm spatial mode

## Comments ##

- If this change is a backward incompatible change, why must this change be made.
- Interesting edge cases to note here
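In practical terms, the fix described in this PR amounts to emitting `spatial=1` among the exported BatchNormalization node's attributes. A minimal sketch of that decision — illustrative only; the real exporter builds the node through ONNX helper APIs, and the function name below is hypothetical:

```python
def batchnorm_onnx_attrs(momentum=0.9, eps=1e-5):
    """Hypothetical helper sketching the attribute set an exporter
    would emit for an ONNX BatchNormalization node after this fix."""
    return {
        "momentum": momentum,
        "epsilon": eps,
        # spatial=1: statistics are computed per channel, matching MXNet
        # BatchNorm semantics (was incorrectly 0 before the fix).
        "spatial": 1,
    }

print(batchnorm_onnx_attrs()["spatial"])  # -> 1
```

Note that the `spatial` attribute only exists in older ONNX opsets; later opset versions removed it and made per-channel normalization the only behavior.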
[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17487: [OpPerf] Consolidate array manipulation related operators
ChaiBapchya commented on a change in pull request #17487: [OpPerf] Consolidate array manipulation related operators URL: https://github.com/apache/incubator-mxnet/pull/17487#discussion_r385342488

## File path: benchmark/opperf/README.md ##

@@ -72,6 +73,8 @@ python incubator-mxnet/benchmark/opperf/opperf.py --output-format json --output-
 3. **dtype** : By default, `float32`. You can override and set the global dtype for all operator benchmarks. Example: --dtype float64.

+4. **profiler** : By default, 'native'. You can override and set the global profiler for all operator benchmarks. Example: --profiler 'python'.

Review comment: Added a line. @apeforest
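The `--profiler` override documented in that README line follows the usual default-plus-override CLI pattern. A stand-alone sketch of the idea — hypothetical code, not the actual `opperf.py`:

```python
import argparse

def parse_args(argv):
    """Sketch of an opperf-style CLI with global dtype/profiler overrides."""
    parser = argparse.ArgumentParser(description="operator benchmark driver (sketch)")
    parser.add_argument("--dtype", default="float32",
                        help="global dtype for all operator benchmarks")
    parser.add_argument("--profiler", default="native",
                        choices=["native", "python"],
                        help="global profiler for all operator benchmarks")
    return parser.parse_args(argv)

print(parse_args([]).profiler)                        # -> native
print(parse_args(["--profiler", "python"]).profiler)  # -> python
```

The `choices` list makes an unsupported profiler name fail fast at argument-parsing time instead of deep inside a benchmark run.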
[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17487: [OpPerf] Consolidate array manipulation related operators
ChaiBapchya commented on a change in pull request #17487: [OpPerf] Consolidate array manipulation related operators URL: https://github.com/apache/incubator-mxnet/pull/17487#discussion_r379225052

## File path: benchmark/opperf/utils/op_registry_utils.py ##

@@ -137,26 +140,24 @@ def prepare_op_inputs(op, arg_params):
             arg_values[arg_name] = DEFAULTS_INPUTS["dtype_int"]
         elif (op.startswith(('random','sample')) or op in float_only) and arg_name == "dtype":
             arg_values[arg_name] = DEFAULTS_INPUTS["dtype_float"]
-        elif "NDArray" in arg_type and op == "ravel_multi_index":
-            arg_values[arg_name] = DEFAULTS_INPUTS["ravel_data"]
         elif op in custom_data and arg_name + "_" + op.lower() in DEFAULTS_INPUTS:
             arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_" + op.lower()]
-        elif "NDArray" in arg_type and arg_name + "_nd" in DEFAULTS_INPUTS:
-            arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_nd"]
-        elif "NDArray" in arg_type and op in ops_4d and arg_name + "_4d" in DEFAULTS_INPUTS:
+        elif op in ops_4d and arg_name + "_4d" in DEFAULTS_INPUTS:
             arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_4d"]
-        elif "NDArray" in arg_type and op in ops_3d and arg_name + "_3d" in DEFAULTS_INPUTS:
-            arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_3d"]
-        elif "NDArray" in arg_type and op == 'softmax_cross_entropy':
-            arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_smce"]
+        elif op in ops_dim1 and arg_name + "_dim1" in DEFAULTS_INPUTS:
+            arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_dim1"]
+        elif "NDArray" in arg_type:
+            if op == "ravel_multi_index":
+                arg_values[arg_name] = DEFAULTS_INPUTS["ravel_data"]
+            elif arg_name + "_nd" in DEFAULTS_INPUTS:
+                arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_nd"]
+            elif op in ops_3d and arg_name + "_3d" in DEFAULTS_INPUTS:
+                arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_3d"]
+            elif op == 'softmax_cross_entropy':
+                arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_smce"]
+        # default case
         elif arg_name in DEFAULTS_INPUTS:
             arg_values[arg_name] = DEFAULTS_INPUTS[arg_name]
-        elif "float" in arg_type and arg_name + "_float" in DEFAULTS_INPUTS:
-            arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_float"]
-        elif "Shape" in arg_type and arg_name + "_shape" in DEFAULTS_INPUTS:

Review comment: Another non-reachable if condition: `axis` and `axis_shape` both exist, and since `if arg_name in DEFAULTS_INPUTS:` is checked before `arg_name + "_shape" in DEFAULTS_INPUTS`, the latter can never be reached. Hence removed it.
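The shadowing described in that review comment can be reduced to a few lines (simplified names; this is not the actual `op_registry_utils` code):

```python
# Both a generic key and a type-suffixed key exist, as with "axis"/"axis_shape".
DEFAULTS_INPUTS = {"axis": 0, "axis_shape": (3, 4)}

def pick_default(arg_name, arg_type):
    if arg_name in DEFAULTS_INPUTS:
        # Matches "axis" first for every argument that has a plain entry...
        return DEFAULTS_INPUTS[arg_name]
    elif "Shape" in arg_type and arg_name + "_shape" in DEFAULTS_INPUTS:
        # ...so this branch can never fire for such arguments: dead code.
        return DEFAULTS_INPUTS[arg_name + "_shape"]
    return None

print(pick_default("axis", "Shape(tuple)"))  # -> 0, never (3, 4)
```

Whenever both the plain and the suffixed key exist, the first branch wins, which is exactly why the unreachable `"_shape"` branch was removed.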
[GitHub] [incubator-mxnet] sxjscience edited a comment on issue #17665: No speedup from using FP16 (4 times slower than PyTorch)
sxjscience edited a comment on issue #17665: No speedup from using FP16 (4 times slower than PyTorch) URL: https://github.com/apache/incubator-mxnet/issues/17665#issuecomment-591832250

I can replicate the performance gap. Also, I added `mx.nd.waitall()` in the first script:

```python
import mxnet as mx
import numpy as np
import time

n = 2**14
ctx = mx.gpu(0)
dtype = np.float16
with ctx:
    a = mx.nd.zeros((n, n), dtype=dtype)
    b = mx.nd.zeros((n, n), dtype=dtype)
    c = mx.nd.zeros((n, n), dtype=dtype)
    mx.nd.waitall()
    tic = time.time()
    for _ in range(100):
        mx.nd.dot(a, b, out=c)
    res = float(c[0, 0].asscalar())  # "use" the result
    print(time.time() - tic)
```

On one GPU of a P3.16x: Time: `57.40008759498596`. The time spent by PyTorch is `8.085056066513062`.
[GitHub] [incubator-mxnet] sxjscience edited a comment on issue #17665: No speedup from using FP16 (4 times slower than PyTorch)
sxjscience edited a comment on issue #17665: No speedup from using FP16 (4 times slower than PyTorch) URL: https://github.com/apache/incubator-mxnet/issues/17665#issuecomment-591832250

I can replicate the performance gap. Also, I added `mx.nd.waitall()` in the first script:

```python
import mxnet as mx
import numpy as np
import time

n = 2**14
ctx = mx.gpu(0)
dtype = np.float16
with ctx:
    a = mx.nd.zeros((n, n), dtype=dtype)
    b = mx.nd.zeros((n, n), dtype=dtype)
    c = mx.nd.zeros((n, n), dtype=dtype)
    mx.nd.waitall()
    tic = time.time()
    for _ in range(100):
        mx.nd.dot(a, b, out=c)
    res = float(c[0, 0].asscalar())  # "use" the result
    print(time.time() - tic)
```
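The `waitall` placement in the script above matters because MXNet's engine is asynchronous: operator calls only enqueue work, so the clock must be bracketed by synchronization points or the measurement is meaningless. The generic timing pattern, with a stand-in `sync` callback in place of `mx.nd.waitall()` (a sketch, not MXNet code):

```python
import time

def bench(op, sync, iters=100):
    """Time `iters` calls of `op`, synchronizing before starting and
    before stopping the clock (cf. mx.nd.waitall / asscalar above)."""
    sync()                        # drain any pending work first
    tic = time.perf_counter()
    for _ in range(iters):
        op()                      # with an async engine this only enqueues
    sync()                        # block until every iteration completes
    return time.perf_counter() - tic

elapsed = bench(op=lambda: sum(range(1000)), sync=lambda: None)
print(elapsed >= 0.0)  # -> True
```

Omitting either `sync()` shifts queued work into (or out of) the timed window, which is one common source of misleading FP16 benchmark numbers.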
[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #17487: [OpPerf] Consolidate array manipulation related operators
ChaiBapchya commented on a change in pull request #17487: [OpPerf] Consolidate array manipulation related operators URL: https://github.com/apache/incubator-mxnet/pull/17487#discussion_r385322042

## File path: benchmark/opperf/utils/op_registry_utils.py ##

@@ -137,26 +140,24 @@ def prepare_op_inputs(op, arg_params):
             arg_values[arg_name] = DEFAULTS_INPUTS["dtype_int"]
         elif (op.startswith(('random','sample')) or op in float_only) and arg_name == "dtype":
             arg_values[arg_name] = DEFAULTS_INPUTS["dtype_float"]
-        elif "NDArray" in arg_type and op == "ravel_multi_index":
-            arg_values[arg_name] = DEFAULTS_INPUTS["ravel_data"]
         elif op in custom_data and arg_name + "_" + op.lower() in DEFAULTS_INPUTS:
             arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_" + op.lower()]
-        elif "NDArray" in arg_type and arg_name + "_nd" in DEFAULTS_INPUTS:
-            arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_nd"]
-        elif "NDArray" in arg_type and op in ops_4d and arg_name + "_4d" in DEFAULTS_INPUTS:
+        elif op in ops_4d and arg_name + "_4d" in DEFAULTS_INPUTS:
             arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_4d"]
-        elif "NDArray" in arg_type and op in ops_3d and arg_name + "_3d" in DEFAULTS_INPUTS:
-            arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_3d"]
-        elif "NDArray" in arg_type and op == 'softmax_cross_entropy':
-            arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_smce"]
+        elif op in ops_dim1 and arg_name + "_dim1" in DEFAULTS_INPUTS:
+            arg_values[arg_name] = DEFAULTS_INPUTS[arg_name + "_dim1"]
+        elif "NDArray" in arg_type:

Review comment: Incorrect logic, hence reverted.
[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #17487: [OpPerf] Consolidate array manipulation related operators
ChaiBapchya commented on issue #17487: [OpPerf] Consolidate array manipulation related operators URL: https://github.com/apache/incubator-mxnet/pull/17487#issuecomment-592133902

> Can you please update this list - https://github.com/apache/incubator-mxnet/tree/master/benchmark/opperf/nd_operations

@sandeep-krishnamurthy updated. Thanks for pointing it out.
[GitHub] [incubator-mxnet] sl1pkn07 opened a new pull request #17709: Fix #17704: move the .gitignore file to right path
sl1pkn07 opened a new pull request #17709: Fix #17704: move the .gitignore file to right path URL: https://github.com/apache/incubator-mxnet/pull/17709

…(this avoids installing it when running `make install`)

## Description ##

Fix #17704

## Checklist ##

### Essentials ###

Please feel free to remove inapplicable items for your PR.

- [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
- [x] Changes are complete (i.e. I finished coding on this PR)
- [ ] All changes have test coverage:
  - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
- [ ] Code is well-documented:
  - For user-facing API changes, API doc string has been updated.
  - For new C++ functions in header files, their functionalities and arguments are documented.
  - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on test set and reference to the original paper if applicable
  - Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
- [x] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

### Changes ###

- [ ] Feature1, tests, (and when applicable, API doc)
- [ ] Feature2, tests, (and when applicable, API doc)

## Comments ##

- If this change is a backward incompatible change, why must this change be made.
- Interesting edge cases to note here
[GitHub] [incubator-mxnet] sl1pkn07 commented on issue #17704: cpp-package: .gitignore in wrong path
sl1pkn07 commented on issue #17704: cpp-package: .gitignore in wrong path URL: https://github.com/apache/incubator-mxnet/issues/17704#issuecomment-592124328 done
[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new 25caa25 Bump the publish timestamp. 25caa25 is described below commit 25caa2519d42ed17e9101a28f56b5e9f17ccdb53 Author: mxnet-ci AuthorDate: Thu Feb 27 18:40:27 2020 + Bump the publish timestamp. --- date.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/date.txt b/date.txt new file mode 100644 index 000..9a21492 --- /dev/null +++ b/date.txt @@ -0,0 +1 @@ +Thu Feb 27 18:40:27 UTC 2020
[GitHub] [incubator-mxnet] eric-haibin-lin commented on issue #17651: Distributed training with kvstore crashes if worker has different number of data batches
eric-haibin-lin commented on issue #17651: Distributed training with kvstore crashes if worker has different number of data batches URL: https://github.com/apache/incubator-mxnet/issues/17651#issuecomment-592112039 I think that's correct. Please let me know if you come across issues using that.
[GitHub] [incubator-mxnet] connorgoggins commented on a change in pull request #17677: [Large Tensor] Fix cumsum op
connorgoggins commented on a change in pull request #17677: [Large Tensor] Fix cumsum op URL: https://github.com/apache/incubator-mxnet/pull/17677#discussion_r385289205

## File path: src/operator/numpy/np_cumsum-inl.h ##

@@ -98,11 +98,11 @@ void CumsumForwardImpl(const OpContext& ctx,
   }
   Stream<xpu> *s = ctx.get_stream<xpu>();
-  MSHADOW_TYPE_SWITCH_WITH_BOOL(in.type_flag_, IType, {
+  MSHADOW_TYPE_SWITCH_WITH_BOOL(in.type_flag_, index_t, {

Review comment: Great point, reverted to IType.
[GitHub] [incubator-mxnet] connorgoggins commented on a change in pull request #17677: [Large Tensor] Fix cumsum op
connorgoggins commented on a change in pull request #17677: [Large Tensor] Fix cumsum op URL: https://github.com/apache/incubator-mxnet/pull/17677#discussion_r385289319

## File path: src/operator/numpy/np_cumsum-inl.h ##

@@ -157,10 +157,10 @@ void CumsumBackwardImpl(const OpContext& ctx,
     }
   }
   Stream<xpu> *s = ctx.get_stream<xpu>();
-  MSHADOW_TYPE_SWITCH_WITH_BOOL(igrad.type_flag_, IType, {
+  MSHADOW_TYPE_SWITCH_WITH_BOOL(igrad.type_flag_, index_t, {

Review comment: Great point, reverted to IType.
[GitHub] [incubator-mxnet] connorgoggins commented on a change in pull request #17677: [Large Tensor] Fix cumsum op
connorgoggins commented on a change in pull request #17677: [Large Tensor] Fix cumsum op URL: https://github.com/apache/incubator-mxnet/pull/17677#discussion_r385289461

## File path: src/operator/numpy/np_cumsum-inl.h ##

@@ -157,10 +157,10 @@ void CumsumBackwardImpl(const OpContext& ctx,
     }
   }
   Stream<xpu> *s = ctx.get_stream<xpu>();
-  MSHADOW_TYPE_SWITCH_WITH_BOOL(igrad.type_flag_, IType, {
+  MSHADOW_TYPE_SWITCH_WITH_BOOL(igrad.type_flag_, index_t, {
     MSHADOW_TYPE_SWITCH(ograd.type_flag_, OType, {
       Kernel::Launch(
-          s, igrad.Size() / middle, igrad.dptr<IType>(),
+          s, igrad.Size() / middle, igrad.dptr<index_t>(),

Review comment: Reverted, thanks!
[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #17677: [Large Tensor] Fix cumsum op
apeforest commented on a change in pull request #17677: [Large Tensor] Fix cumsum op URL: https://github.com/apache/incubator-mxnet/pull/17677#discussion_r385286229

## File path: src/operator/numpy/np_cumsum-inl.h ##

@@ -157,10 +157,10 @@ void CumsumBackwardImpl(const OpContext& ctx,
     }
   }
   Stream<xpu> *s = ctx.get_stream<xpu>();
-  MSHADOW_TYPE_SWITCH_WITH_BOOL(igrad.type_flag_, IType, {
+  MSHADOW_TYPE_SWITCH_WITH_BOOL(igrad.type_flag_, index_t, {

Review comment: You don't need a macro if the type is predefined.
[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17677: [Large Tensor] Fix cumsum op
access2rohit commented on a change in pull request #17677: [Large Tensor] Fix cumsum op URL: https://github.com/apache/incubator-mxnet/pull/17677#discussion_r385285544

## File path: src/operator/numpy/np_cumsum-inl.h ##

@@ -157,10 +157,10 @@ void CumsumBackwardImpl(const OpContext& ctx,
     }
   }
   Stream<xpu> *s = ctx.get_stream<xpu>();
-  MSHADOW_TYPE_SWITCH_WITH_BOOL(igrad.type_flag_, IType, {
+  MSHADOW_TYPE_SWITCH_WITH_BOOL(igrad.type_flag_, index_t, {

Review comment: Can you revert this change as well? Rest LGTM!
[GitHub] [incubator-mxnet] access2rohit commented on a change in pull request #17677: [Large Tensor] Fix cumsum op
access2rohit commented on a change in pull request #17677: [Large Tensor] Fix cumsum op URL: https://github.com/apache/incubator-mxnet/pull/17677#discussion_r385285641

## File path: src/operator/numpy/np_cumsum-inl.h ##

@@ -157,10 +157,10 @@ void CumsumBackwardImpl(const OpContext& ctx,
     }
   }
   Stream<xpu> *s = ctx.get_stream<xpu>();
-  MSHADOW_TYPE_SWITCH_WITH_BOOL(igrad.type_flag_, IType, {
+  MSHADOW_TYPE_SWITCH_WITH_BOOL(igrad.type_flag_, index_t, {
     MSHADOW_TYPE_SWITCH(ograd.type_flag_, OType, {
       Kernel::Launch(
-          s, igrad.Size() / middle, igrad.dptr<IType>(),
+          s, igrad.Size() / middle, igrad.dptr<index_t>(),

Review comment: Ditto
[GitHub] [incubator-mxnet] apeforest commented on a change in pull request #17677: [Large Tensor] Fix cumsum op
apeforest commented on a change in pull request #17677: [Large Tensor] Fix cumsum op URL: https://github.com/apache/incubator-mxnet/pull/17677#discussion_r385284895

## File path: src/operator/numpy/np_cumsum-inl.h ##

@@ -98,11 +98,11 @@ void CumsumForwardImpl(const OpContext& ctx,
   }
   Stream<xpu> *s = ctx.get_stream<xpu>();
-  MSHADOW_TYPE_SWITCH_WITH_BOOL(in.type_flag_, IType, {
+  MSHADOW_TYPE_SWITCH_WITH_BOOL(in.type_flag_, index_t, {

Review comment: I think you will not need this switch macro, right?
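The point running through these review comments — a runtime type switch is only needed when the element type is unknown until runtime, while a fixed alias like `index_t` needs no dispatch at all — can be sketched in Python with illustrative stand-ins for the C++ macros:

```python
def type_switch(type_flag, body):
    """Stand-in for an MSHADOW_TYPE_SWITCH-style macro: picks a concrete
    element type from a runtime type flag and instantiates `body` with it."""
    table = {0: float, 1: int}
    return body(table[type_flag])

def cumsum(xs, elem_type):
    out, acc = [], elem_type(0)
    for i, v in enumerate(xs):   # i plays the role of index_t: its type is
        acc += elem_type(v)      # fixed up front, so no runtime dispatch is
        out.append(acc)          # ever needed for the index
    return out

# Dispatch is only needed for the *data* type of the array:
print(type_switch(0, lambda t: cumsum([1, 2, 3], t)))  # -> [1.0, 3.0, 6.0]
```

Substituting `index_t` as the switch's type parameter, as the original diff did, would shadow the real index type with whatever data type the switch selects — which is why the reviewers asked for the revert.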
[GitHub] [incubator-mxnet] leezu commented on issue #17704: cpp-package: .gitignore in wrong path
leezu commented on issue #17704: cpp-package: .gitignore in wrong path URL: https://github.com/apache/incubator-mxnet/issues/17704#issuecomment-592091298 It's valid to have .gitignore in subdirectories, so it's not necessary to change them all. `cpp-package/include/mxnet-cpp/.gitignore` is copied together with the include files, so there is a good reason to fix `cpp-package/include/mxnet-cpp/.gitignore`. You could move it to `cpp-package/.gitignore` for example.
[GitHub] [incubator-mxnet] leezu commented on issue #17708: Silence all compiler warnings when building
leezu commented on issue #17708: Silence all compiler warnings when building URL: https://github.com/apache/incubator-mxnet/issues/17708#issuecomment-592087728 @hzfan could you fix the warnings in ffi that you introduced? CI already tests with `-Werror` to prevent introducing more problems, but that test needs to be changed to run with a newer compiler toolchain.
[GitHub] [incubator-mxnet] sl1pkn07 opened a new issue #17708: Silence all compiler warnings when building
sl1pkn07 opened a new issue #17708: Silence all compiler warnings when building URL: https://github.com/apache/incubator-mxnet/issues/17708

## Description

Silence all warnings when building the project.

## References

When building the project, there are tons of warnings like

~~~
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./runtime/ffi_helper.h(41): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./runtime/ffi_helper.h(41): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./runtime/ffi_helper.h(57): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./runtime/ffi_helper.h(57): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./runtime/ffi_helper.h(77): warning: extra ";" ignored
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./runtime/ffi_helper.h(89): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./runtime/ffi_helper.h(89): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./runtime/ffi_helper.h(99): warning: extra ";" ignored
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./node/container.h(45): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./node/container.h(45): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./ir/expr.h(43): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./ir/expr.h(43): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./ir/expr.h(94): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./ir/expr.h(94): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./ir/expr.h(144): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./ir/expr.h(144): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./ir/expr.h(189): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/./ir/expr.h(189): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/runtime/container.h(176): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/runtime/container.h(176): warning: type qualifier on return type is meaningless
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mxnet/runtime/container.h(276): warning: extra ";" ignored
~~~

and others like

~~~
[ 64%] Building CUDA object CMakeFiles/mxnet_static.dir/src/common/utils.cu.o
/tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/include/mshadow/./base.h(842): warning: integer conversion resulted in a change of sign
In file included from /tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/src/storage/../profiler/storage_profiler.h:25,
                 from /tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/src/storage/storage.cc:32:
In member function 'void mxnet::profiler::static_string::set(const char*) [with long unsigned int string_size = 128]',
    inlined from 'mxnet::profiler::ProfileCounter::ProfileCounterStat::ProfileCounterStat(const char*, uint64_t)' at /tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/src/storage/../profiler/./profiler.h:636:16,
    inlined from 'static std::unique_ptr::value, StatType>::type> mxnet::profiler::Profiler::CreateProfileStat(Args ...) [with StatType = mxnet::profiler::ProfileCounter::ProfileCounterStat; Args = {const char*, long unsigned int}]' at /tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/src/storage/../profiler/./profiler.h:419:38,
    inlined from 'void mxnet::profiler::Profiler::AddNewProfileStat(SetExtraInfoFunction, Args ...) [with StatType = mxnet::profiler::ProfileCounter::ProfileCounterStat; SetExtraInfoFunction = mxnet::profiler::ProfileCounter::SendStat(uint64_t)::; Args = {const char*, long unsigned int}]' at /tmp/makepkg/sl1-mxnet-git/src/incubator-mxnet/src/storage/../profiler/./profiler.h:320:33,
    inlined from 'void mxnet::profiler::ProfileCounter::SendStat(uint64_t)' at
~~~
[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.
This is an automated email from the ASF dual-hosted git repository. aaronmarkham pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git

The following commit(s) were added to refs/heads/asf-site by this push:
     new 431890f  Bump the publish timestamp.
431890f is described below

commit 431890f2617fcd0284c024c0c36f08dd34bf3235
Author: mxnet-ci
AuthorDate: Thu Feb 27 12:40:13 2020 +0000

    Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..41771b2
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Thu Feb 27 12:40:13 UTC 2020
[GitHub] [incubator-mxnet] EmilPi edited a comment on issue #17454: IdentityAttachKLSparseReg - operator argument error
EmilPi edited a comment on issue #17454: IdentityAttachKLSparseReg - operator argument error URL: https://github.com/apache/incubator-mxnet/issues/17454#issuecomment-591908426 Any news on this? I have the same issue with mxnet==1.5.0 and mxnet-cu100==1.5.0 installed with pip3. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] [incubator-mxnet] EmilPi commented on issue #17454: IdentityAttachKLSparseReg - operator argument error
EmilPi commented on issue #17454: IdentityAttachKLSparseReg - operator argument error URL: https://github.com/apache/incubator-mxnet/issues/17454#issuecomment-591908426 Any news on this?
[GitHub] [incubator-mxnet] ciyongch commented on issue #17705: mkldnn quantized FC is slow
ciyongch commented on issue #17705: mkldnn quantized FC is slow URL: https://github.com/apache/incubator-mxnet/issues/17705#issuecomment-591868495 @eric-haibin-lin I just created PR https://github.com/apache/incubator-mxnet/pull/17707 to address this issue; please review.
[GitHub] [incubator-mxnet] ciyongch opened a new pull request #17707: [MKLDNN] Remove overhead of sg_mkldnn_fullyconnected op
ciyongch opened a new pull request #17707: [MKLDNN] Remove overhead of sg_mkldnn_fullyconnected op URL: https://github.com/apache/incubator-mxnet/pull/17707

## Description ##
This PR mainly focuses on removing the overhead of sg_mkldnn_fullyconnected, especially in channel-wise quantization mode (there is not much change for FP32 or tensor-wise quantization mode), as described in https://github.com/apache/incubator-mxnet/issues/17705. It removes the extra condition checks in the current logic (weights version and calibrated data value), but leaves an environment variable, `MXNET_MKLDNN_QFC_DYNAMIC_PARAMS`, for the scenario of changing those values on the fly (we have not met such usage, but it is kept just in case).

## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
- [ ] Changes are complete (i.e. I finished coding on this PR)
- [ ] All changes have test coverage:
  - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
- [ ] Code is well-documented:
  - For user-facing API changes, API doc string has been updated.
  - For new C++ functions in header files, their functionalities and arguments are documented.
  - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  - Check the API doc at https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
- [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

### Changes ###
- [ ] Feature1, tests, (and when applicable, API doc)
- [ ] Feature2, tests, (and when applicable, API doc)

## Comments ##
- If this change is a backward incompatible change, why must this change be made.
- Interesting edge cases to note here

@TaoLv @eric-haibin-lin @pengzhao-intel
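The gating the PR describes (skip the per-forward checks unless the env var is set) can be sketched in a few lines. This is a hedged illustration, not MXNet source: `cached_forward`, `params_changed`, and `build_primitive` are hypothetical stand-ins; only the variable name `MXNET_MKLDNN_QFC_DYNAMIC_PARAMS` comes from the PR description.

```python
import os

def cached_forward(cache, params_changed, build_primitive):
    """Reuse a cached primitive; re-validate parameters only when
    MXNET_MKLDNN_QFC_DYNAMIC_PARAMS is set (dynamic-params mode)."""
    dynamic = os.environ.get("MXNET_MKLDNN_QFC_DYNAMIC_PARAMS", "0") == "1"
    if cache.get("primitive") is None or (dynamic and params_changed()):
        cache["primitive"] = build_primitive()
        cache["rebuilds"] = cache.get("rebuilds", 0) + 1
    return cache["primitive"]
```

With the variable unset, repeated forwards pay the check and rebuild cost only once; with it set, every on-the-fly change of weights or calibration values triggers a rebuild.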
[GitHub] [incubator-mxnet] sl1pkn07 commented on issue #17704: cpp-package: .gitignore in wrong path
sl1pkn07 commented on issue #17704: cpp-package: .gitignore in wrong path URL: https://github.com/apache/incubator-mxnet/issues/17704#issuecomment-591866571
~~~
┌─┤[$]|[sl1pkn07]|[sL1pKn07]|[~/aplicaciones/incubator-mxnet]|
└───╼ find . -name .gitignore
./python/.gitignore
./python/mxnet/gluon/.gitignore
./amalgamation/.gitignore
./julia/models/Inception/.gitignore
./julia/.gitignore
./julia/examples/char-lstm/.gitignore
./julia/docs/.gitignore
./scala-package/.mvn/wrapper/.gitignore
./scala-package/.gitignore
./example/ssd/tools/caffe_converter/.gitignore
./example/neural-style/.gitignore
./example/gluon/lipnet/.gitignore
./example/cnn_text_classification/.gitignore
./example/recommenders/.gitignore
./.gitignore
./tools/coreml/pip_package/.gitignore
./tools/bandwidth/.gitignore
./tools/caffe_converter/.gitignore
./contrib/clojure-package/.gitignore
./contrib/clojure-package/src/org/apache/clojure_mxnet/gen/.gitignore
./contrib/clojure-package/examples/multi-label/.gitignore
./contrib/clojure-package/examples/imclassification/.gitignore
./contrib/clojure-package/examples/neural-style/.gitignore
./contrib/clojure-package/examples/bert/.gitignore
./contrib/clojure-package/examples/pre-trained-models/.gitignore
./contrib/clojure-package/examples/rnn/.gitignore
./contrib/clojure-package/examples/captcha/.gitignore
./contrib/clojure-package/examples/cnn-text-classification/.gitignore
./contrib/clojure-package/examples/tutorial/.gitignore
./contrib/clojure-package/examples/profiler/.gitignore
./contrib/clojure-package/examples/visualization/.gitignore
./contrib/clojure-package/examples/infer/objectdetector/.gitignore
./contrib/clojure-package/examples/infer/predictor/.gitignore
./contrib/clojure-package/examples/infer/imageclassifier/.gitignore
./contrib/clojure-package/examples/gan/.gitignore
./3rdparty/mshadow/guide/.gitignore
./3rdparty/mshadow/guide/exp-template/.gitignore
./3rdparty/mshadow/guide/mshadow-ps/.gitignore
./3rdparty/mshadow/.gitignore
./3rdparty/mshadow/mshadow-ps/.gitignore
./perl-package/.gitignore
./tests/nightly/.gitignore
./tests/cpp/.gitignore
./tests/.gitignore
./docker/.gitignore
./R-package/.gitignore
./docs/static_site/.gitignore
./docs/static_site/src/.gitignore
./docs/.gitignore
./docs/python_docs/python/.gitignore
./docs/python_docs/themes/.gitignore
~~~
also this .gitignore?
[GitHub] [incubator-mxnet] sl1pkn07 closed pull request #17580: fix #17579
sl1pkn07 closed pull request #17580: fix #17579 URL: https://github.com/apache/incubator-mxnet/pull/17580
[GitHub] [incubator-mxnet] hkvision commented on issue #17651: Distributed training with kvstore crashes if worker has different number of data batches
hkvision commented on issue #17651: Distributed training with kvstore crashes if worker has different number of data batches URL: https://github.com/apache/incubator-mxnet/issues/17651#issuecomment-591834890 @eric-haibin-lin Hi, I'm using NDArrayIter. I checked ResizeIter. So, to make sure all my data gets trained on, I need to set the size to the largest batch count among all workers; workers with fewer batches will, after finishing all their data, iterate from the very beginning until the target size is reached. Point out if I'm wrong. :) Thanks so much; this can be a workaround for me.
[GitHub] [incubator-mxnet] hkvision edited a comment on issue #17651: Distributed training with kvstore crashes if worker has different number of data batches
hkvision edited a comment on issue #17651: Distributed training with kvstore crashes if worker has different number of data batches URL: https://github.com/apache/incubator-mxnet/issues/17651#issuecomment-591834890 @eric-haibin-lin Hi, I'm using NDArrayIter. I checked ResizeIter and it looks great! So, to make sure all my data gets trained on, I need to set the size to the largest batch count among all workers; workers with fewer batches will, after finishing all their data, iterate from the very beginning until the target size is reached. Point out if I'm wrong. :) Thanks so much; this can be a good workaround for me.
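The wrap-around behaviour discussed above can be sketched in pure Python. This is a hedged illustration of the idea only: `resize_iter` is a hypothetical stand-in, not the actual `mx.io.ResizeIter` implementation.

```python
def resize_iter(batches, size):
    """Yield exactly `size` batches, restarting from the beginning of
    `batches` once the underlying data is exhausted (the padding
    behaviour hkvision describes for workers with fewer batches)."""
    i = 0
    for _ in range(size):
        if i == len(batches):
            i = 0  # wrap around to the first batch
        yield batches[i]
        i += 1

# A worker holding only 3 local batches, padded to 5 steps:
padded = list(resize_iter(["b0", "b1", "b2"], 5))
# → ['b0', 'b1', 'b2', 'b0', 'b1']
```

Setting `size` to the largest per-worker batch count this way keeps every worker performing the same number of kvstore pushes, which is what avoids the crash.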