[GitHub] rahul003 commented on a change in pull request #11027: Add standard ResNet data augmentation for ImageRecordIter
rahul003 commented on a change in pull request #11027: Add standard ResNet data augmentation for ImageRecordIter
URL: https://github.com/apache/incubator-mxnet/pull/11027#discussion_r203620309

## File path: example/image-classification/common/data.py

```diff
@@ -63,6 +65,20 @@ def add_data_aug_args(parser):
                      help='max ratio to scale')
     aug.add_argument('--min-random-scale', type=float, default=1,
                      help='min ratio to scale, should >= img_size/input_shape. otherwise use --pad-size')
+    aug.add_argument('--max-random-area', type=float, default=1,
+                     help='max area to crop in random resized crop, whose range is [0, 1]')
+    aug.add_argument('--min-random-area', type=float, default=1,
+                     help='min area to crop in random resized crop, whose range is [0, 1]')
+    aug.add_argument('--brightness', type=float, default=0,
```

Review comment: @DickJC123 Hi Dick, I'm fixing these issues in https://github.com/apache/incubator-mxnet/pull/11533

This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
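The `--min-random-area`/`--max-random-area` pair bounds how much of the source image a random resized crop may keep. A rough illustration of that knob (my own sketch, not MXNet's implementation — the real augmenter also jitters aspect ratio, which is omitted here):

```python
import math
import random

# Hypothetical sketch of what --min-random-area/--max-random-area bound:
# the crop retains a uniformly sampled fraction of the source image's area.
# (MXNet's actual augmenter also varies aspect ratio; this keeps the crop square.)
def sample_crop_size(width, height, min_area, max_area, rng=random):
    frac = rng.uniform(min_area, max_area)  # fraction of the area to keep
    scale = math.sqrt(frac)                 # per-side scale for a square crop
    return max(1, int(width * scale)), max(1, int(height * scale))

random.seed(0)
w, h = sample_crop_size(256, 256, 0.08, 1.0)
print(w, h)  # some crop no larger than the 256x256 source
```

With `min_area == max_area == 1` the "crop" degenerates to the full image, which matches the defaults of 1 in the diff above.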
[GitHub] rahul003 commented on issue #11533: Fix image classification scripts and Improve Fp16 tutorial
rahul003 commented on issue #11533: Fix image classification scripts and Improve Fp16 tutorial URL: https://github.com/apache/incubator-mxnet/pull/11533#issuecomment-406176396 @hetong007 please confirm the resnet-aug change to turn on random_mirror.
[GitHub] rahul003 commented on issue #11533: Fix image classification scripts and Improve Fp16 tutorial
rahul003 commented on issue #11533: Fix image classification scripts and Improve Fp16 tutorial URL: https://github.com/apache/incubator-mxnet/pull/11533#issuecomment-406176474 @eric-haibin-lin The training curve image is now in the dmlc repository.
[GitHub] zhaowwenzhong opened a new issue #11815: How do I set a learning rate schedule?
zhaowwenzhong opened a new issue #11815: How do I set a learning rate schedule? URL: https://github.com/apache/incubator-mxnet/issues/11815 Can anyone spot what is wrong with the code below?

```python
import mxnet.optimizer as optimizer
...
lr_scheduler = mx.lr_scheduler.PolyScheduler(base_lr=0.1, pwr=2, max_update=1000)
opt = optimizer.SGD(learning_rate=0.1, momentum=0.9, wd=0.0005,
                    rescale_grad=1.0 / 4, lr_scheduler=lr_scheduler)

def _batch_callback(param):
    print(param.locals['optimizer'].lr)

model = mx.mod.Module(context=ctx, symbol=sym)
model.fit(train_dataiter,
          optimizer=opt,
          begin_epoch=begin_epoch,
          num_epoch=num_epoch,
          arg_params=arg_params,
          aux_params=aux_params,
          eval_metric=eval_metrics,
          allow_missing=True,
          batch_end_callback=_batch_callback,
          epoch_end_callback=mx.callback.do_checkpoint(prefix))
```

During training the printed lr is 0.1, and it is still 0.1 after every iteration. I can't tell whether the learning rate is actually being adjusted during training — how can I print the learning rate actually used at each iteration? This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
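For the question above: `optimizer.lr` holds only the base rate; when a scheduler is attached, the effective rate is derived from the update count, so printing `optimizer.lr` shows 0.1 forever. A minimal sketch of the polynomial decay a `PolyScheduler` applies (my own illustration, assuming the rate decays to a final value of 0):

```python
# Sketch of polynomial learning-rate decay (assumed final rate = 0):
#   lr(t) = base_lr * (1 - t / max_update) ** pwr,  clamped to 0 past max_update.
def poly_lr(base_lr, num_update, max_update, pwr):
    if num_update >= max_update:
        return 0.0
    return base_lr * (1.0 - num_update / float(max_update)) ** pwr

print(poly_lr(0.1, 0, 1000, 2))    # 0.1 at the first update
print(poly_lr(0.1, 500, 1000, 2))  # 0.025 halfway through
```

Evaluating the schedule at the current update count (rather than reading `optimizer.lr` in the callback) is how one would observe the decayed value.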
[GitHub] hetong007 commented on issue #11533: Fix image classification scripts and Improve Fp16 tutorial
hetong007 commented on issue #11533: Fix image classification scripts and Improve Fp16 tutorial URL: https://github.com/apache/incubator-mxnet/pull/11533#issuecomment-406185114 For cifar training, the standard augmentation for benchmarking is:

```python
mean_rgb = [125.307, 122.961, 113.8575]
std_rgb = [51.5865, 50.847, 51.255]
train_data = mx.io.ImageRecordIter(
    path_imgrec=rec_train,
    path_imgidx=rec_train_idx,
    preprocess_threads=num_workers,
    shuffle=True,
    batch_size=batch_size,
    data_shape=(3, 32, 32),
    mean_r=mean_rgb[0],
    mean_g=mean_rgb[1],
    mean_b=mean_rgb[2],
    std_r=std_rgb[0],
    std_g=std_rgb[1],
    std_b=std_rgb[2],
    rand_mirror=True,
    pad=4,
    fill_value=0,
    rand_crop=True,
    max_crop_size=32,
    min_crop_size=32,
)
val_data = mx.io.ImageRecordIter(
    path_imgrec=rec_val,
    path_imgidx=rec_val_idx,
    preprocess_threads=num_workers,
    shuffle=False,
    batch_size=batch_size,
    data_shape=(3, 32, 32),
    mean_r=mean_rgb[0],
    mean_g=mean_rgb[1],
    mean_b=mean_rgb[2],
    std_r=std_rgb[0],
    std_g=std_rgb[1],
    std_b=std_rgb[2],
)
```

Can you change it accordingly?
[GitHub] lupesko commented on a change in pull request #11780: fix flaky test test_operator_gpu.test_countsketch
lupesko commented on a change in pull request #11780: fix flaky test test_operator_gpu.test_countsketch
URL: https://github.com/apache/incubator-mxnet/pull/11780#discussion_r203629802

## File path: src/operator/contrib/count_sketch.cu

```diff
@@ -165,7 +165,8 @@ inline void CountSketchBackward(const Tensor &in_grad,
         nthreads, in_grad_ptr+bstart*in_dim, h_ptr, s_ptr,
         out_grad_ptr+bstart*out_dim, batchlen, in_dim, out_dim);
-    MSHADOW_CUDA_POST_KERNEL_CHECK(sketch_backward_kernel);
+    cudaError_t err = cudaDeviceSynchronize();
```

Review comment: DRY is not only about lines of code; it is more about maintainability. Consider a case where there is an issue with the flow in line 168, and the developer decides to change it from cudaDeviceSynchronize to another API. Without DRY, the developer needs to remember to look for the other places in the code that call the API, and sometimes this will be missed. With DRY, different parts of the code are funneled through the one function that holds this logic, and updating that one location updates all code paths. Makes sense?
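The DRY point above can be made concrete with a tiny sketch (illustrative only, and in Python rather than the CUDA C++ of the PR): every fallible call funnels through one checker, so changing the error-handling policy means editing exactly one place.

```python
# One choke point for error handling: call sites never inline their own check,
# so swapping the policy (raise vs. log vs. abort) is a single-location edit.
def check_call(ret_code, api_name):
    if ret_code != 0:
        raise RuntimeError("%s failed with code %d" % (api_name, ret_code))

check_call(0, "cudaDeviceSynchronize")  # success: no exception raised
try:
    check_call(77, "cudaDeviceSynchronize")
except RuntimeError as err:
    print(err)  # cudaDeviceSynchronize failed with code 77
```

This is the same role `MSHADOW_CUDA_POST_KERNEL_CHECK` plays in the original code: a shared macro rather than an inlined `cudaDeviceSynchronize` check at each kernel launch.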
[GitHub] asmushetzel commented on issue #11630: Fix flaky test test_deconvolution
asmushetzel commented on issue #11630: Fix flaky test test_deconvolution URL: https://github.com/apache/incubator-mxnet/pull/11630#issuecomment-406197504 While the changes all seem valid and fine, this still remains a mystery, as there should be no difference between the two APIs if all operands are float32 (as anirudh stated above). I'm a bit confused, though, that the original PR description mentions float16 (why?). We should definitely try to get more information from NVIDIA in order to figure out what happened there.
[GitHub] yifeim commented on issue #11651: Add logistic regression tutorial
yifeim commented on issue #11651: Add logistic regression tutorial URL: https://github.com/apache/incubator-mxnet/pull/11651#issuecomment-406199511 The tutorial looks awesome! A few comments:

* Is it just me, or does everybody see a training loss about 8x larger than the validation loss? I understand it may be due to unstable weights from the backward update after every batch, but it is still not what I would usually expect. (Not to say that I have not seen similar issues before; I am still slightly puzzled by the root cause.)
* The model class is linear, but the proposed network is a 3-layer overkill (which may be related to the larger training loss). If you want a fun problem, maybe consider an `xor` function class: https://medium.com/@jayeshbahire/the-xor-problem-in-neural-networks-50006411840b
* Is `mxnet.gluon.loss.LogisticLoss` equivalent to `SigmoidBCELoss`? Since you are using metrics, it may be worthwhile exploring `mx.metric.F1`.
* There are some magic hyperparameters that should be explained: a strong wd=0.01 and an Xavier magnitude of 2.34 for initialization. For an elementary tutorial, I would try to simplify them unless they are part of the intended purpose.
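On the LogisticLoss vs. SigmoidBCELoss question: the two are the same function under a label remap, which a quick numeric check in plain Python shows (my own sketch, independent of the Gluon APIs):

```python
import math

# Logistic loss with labels y in {-1, +1} vs. sigmoid binary cross-entropy
# with labels t in {0, 1}: identical once t = (y + 1) / 2.
def logistic_loss(score, y):
    return math.log(1.0 + math.exp(-y * score))

def sigmoid_bce(score, t):
    p = 1.0 / (1.0 + math.exp(-score))
    return -(t * math.log(p) + (1 - t) * math.log(1.0 - p))

for score in (-2.0, -0.5, 0.0, 1.5):
    for y in (-1, 1):
        assert abs(logistic_loss(score, y) - sigmoid_bce(score, (y + 1) // 2)) < 1e-9
print("losses agree")
```

The identity follows from `log(1 + e^{-f}) = -log(sigmoid(f))` for the positive class and `log(1 + e^{f}) = -log(1 - sigmoid(f))` for the negative one.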
[GitHub] CodePlay2016 opened a new issue #11817: got 'Process finished with exit code 134 (interrupted by signal 6: SIGABRT)' when binding quantized module
CodePlay2016 opened a new issue #11817: got 'Process finished with exit code 134 (interrupted by signal 6: SIGABRT)' when binding quantized module URL: https://github.com/apache/incubator-mxnet/issues/11817

## Description
When binding a module built from a symbol quantized with the [quantization tool](https://github.com/apache/incubator-mxnet/tree/master/example/quantization), the console showed 'Process finished with exit code 134 (interrupted by signal 6: SIGABRT)' with no further information, which left me really confused. According to [this page](https://www.student.cs.uwaterloo.ca/~cs136/seashell/docs/seashell-error-codes.html), this exit code may be caused by an invalid memory access, such as OOM or an index out of range, but I find it hard to locate where the error occurs. Please help. It's worth mentioning that I can bind the symbol before quantization without errors.

## Environment info (Required)
```
--Python Info--
('Version :', '2.7.12')
('Compiler :', 'GCC 5.4.0 20160609')
('Build:', ('default', 'Nov 19 2016 06:48:10'))
('Arch :', ('64bit', 'ELF'))
Pip Info---
('Version :', '10.0.1')
('Directory:', '/usr/local/lib/python2.7/dist-packages/pip')
--MXNet Info---
('Version :', '1.2.0')
('Directory:', '/usr/local/lib/python2.7/dist-packages/mxnet')
('Commit Hash :', '297c64fd2ee404612aa3ecc880b940fb2538039c')
--System Info--
('Platform :', 'Linux-4.4.0-87-generic-x86_64-with-Ubuntu-16.04-xenial')
('system :', 'Linux')
('node :', 'BoHong')
('release :', '4.4.0-87-generic')
('version :', '#110-Ubuntu SMP Tue Jul 18 12:55:35 UTC 2017')
--Hardware Info--
('machine :', 'x86_64')
('processor:', 'x86_64')
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 48
On-line CPU(s) list: 0-47
Thread(s) per core: 2
Core(s) per socket: 12
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 79
Model name: Intel(R) Xeon(R) CPU E5-2650 v4 @ 2.20GHz
Stepping: 1
CPU MHz: 2508.429
CPU max MHz: 2900.
CPU min MHz: 1200.
BogoMIPS: 4401.31
Virtualization: VT-x
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 30720K
NUMA node0 CPU(s): 0-11,24-35
NUMA node1 CPU(s): 12-23,36-47
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch epb intel_pt tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 hle avx2 smep bmi2 erms invpcid rtm cqm rdseed adx smap xsaveopt cqm_llc cqm_occup_llc cqm_mbm_total cqm_mbm_local dtherm ida arat pln pts
--Network Test--
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0190 sec, LOAD: 1.5759 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0134 sec, LOAD: 9.3883 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.2021 sec, LOAD: 1.9859 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0132 sec, LOAD: 1.3754 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.4865 sec, LOAD: 3.5648 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.4228 sec, LOAD: 1.7980 sec.
```
Using Python

## Error Message:
```
Process finished with exit code 134 (interrupted by signal 6: SIGABRT)
```

## Minimum reproducible example
```python
ctx = mx.gpu(2)
# cqsym, qarg_params, aux_params = _get_quantized(model_path, ctx)
prefix = os.path.join(model_path, 'new_sym_quantized')
cqsym, qarg_params, aux_params = load_model(prefix)
mod2 = mx.mod.Module(symbol=cqsym, context=ctx, label_names=None)
mod2.bind(data_shapes=[('data', (32, 3, 112, 96))], for_training=False)
mod2.set_params(qarg_params, aux_params)
```
The symbol and parameter files can be found in this [repo](https://github.com/CodePlay2016/prune_mx_face), named `new_sym_quantized.json` and `new_sym_quantized.param`; the script is `quantization.py`.

## Steps to reproduce
[GitHub] cclauss opened a new pull request #11818: Add trove classifiers for PyPI page
cclauss opened a new pull request #11818: Add trove classifiers for PyPI page URL: https://github.com/apache/incubator-mxnet/pull/11818

https://pypi.org/project/mxnet does not currently provide information about supported languages and Python versions. This PR proposes to add PyPI [trove classifiers](https://packaging.python.org/specifications/core-metadata/?highlight=trove#classifier-multiple-use) to make this clear. For an example, see the bottom left of https://pypi.org/project/requests

## Description ##
(Brief description on what this PR is about)

## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
- [ ] Changes are complete (i.e. I finished coding on this PR)
- [ ] All changes have test coverage:
  - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
- [ ] Code is well-documented:
  - For user-facing API changes, API doc string has been updated.
  - For new C++ functions in header files, their functionalities and arguments are documented.
  - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  - Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
- [ ] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change

### Changes ###
- [ ] Feature1, tests, (and when applicable, API doc)
- [ ] Feature2, tests, (and when applicable, API doc)

## Comments ##
- If this change is a backward incompatible change, why must this change be made?
- Interesting edge cases to note here
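For context, trove classifiers are plain strings passed to `setup()` in setup.py. The exact entries this PR adds are in its diff, so the list below is purely illustrative placeholder data:

```python
# Hypothetical classifier list -- the real entries for mxnet live in the PR's
# setup.py diff. PyPI renders these strings in the project page sidebar.
CLASSIFIERS = [
    "Intended Audience :: Developers",
    "Programming Language :: C++",
    "Programming Language :: Python",
    "Programming Language :: Python :: 2.7",
    "Programming Language :: Python :: 3.6",
]

# They would be wired in as: setup(..., classifiers=CLASSIFIERS)
for entry in CLASSIFIERS:
    top_level = entry.split(" :: ")[0]
    print(top_level)
```

Each classifier is a `" :: "`-separated path in a fixed taxonomy, which is how PyPI groups them into the "supported languages / Python versions" sections the PR description mentions.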
[GitHub] cclauss commented on issue #11813: fix #11810
cclauss commented on issue #11813: fix #11810 URL: https://github.com/apache/incubator-mxnet/pull/11813#issuecomment-406221188 This PR could have a more descriptive / helpful name. ;-)
[GitHub] cclauss opened a new issue #11819: Is there a way to stop a Jenkins test run?
cclauss opened a new issue #11819: Is there a way to stop a Jenkins test run? URL: https://github.com/apache/incubator-mxnet/issues/11819 Sometimes when making a commit to a PR, I see that I have made a mistake. It would be helpful if I could cancel the Jenkins job and make a new commit. It seems like a waste of resources to continue a Jenkins job when I already know the associated commit is not a good one.
[GitHub] zhenpingfeng commented on issue #8866: src/operator/./bilinear_sampler-inl.h:105: Have not implemented the data req combinations! gdata_req=0 ggrid_req=1
zhenpingfeng commented on issue #8866: src/operator/./bilinear_sampler-inl.h:105: Have not implemented the data req combinations! gdata_req=0 ggrid_req=1 URL: https://github.com/apache/incubator-mxnet/issues/8866#issuecomment-406273602 Any news? I hit this bug too.
[GitHub] ifeherva opened a new pull request #11820: [MXNET-524] Broadcast like operator
ifeherva opened a new pull request #11820: [MXNET-524] Broadcast like operator URL: https://github.com/apache/incubator-mxnet/pull/11820 ## Description ## An operator that outputs an array broadcast to the shape of a given target array. This allows easier broadcasting and hybridization.
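The semantics can be sketched in NumPy (my illustration, not the PR's implementation): the output takes its shape from a target tensor rather than from a hard-coded shape tuple, which is what makes hybridized code simpler.

```python
import numpy as np

# Sketch of broadcast_like semantics: broadcast `data` to `target`'s shape,
# using only the target's shape, never its values.
def broadcast_like(data, target):
    return np.broadcast_to(data, target.shape)

a = np.array([[1.0], [2.0]])      # shape (2, 1)
t = np.zeros((2, 3))              # only its shape matters here
out = broadcast_like(a, t)
print(out.shape)  # (2, 3)
```

In a hybridized Gluon block the target's shape may be unknown at build time, so deriving it from a runtime tensor (as above) avoids baking a concrete shape into the graph.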
[GitHub] KellenSunderland opened a new pull request #11821: Tensorrt integration 8 - WIP
KellenSunderland opened a new pull request #11821: Tensorrt integration 8 - WIP URL: https://github.com/apache/incubator-mxnet/pull/11821
[GitHub] larroy commented on issue #11579: Refactor armv8 builds
larroy commented on issue #11579: Refactor armv8 builds URL: https://github.com/apache/incubator-mxnet/pull/11579#issuecomment-406294602 @marcoabreu please merge
[GitHub] mdtdev opened a new issue #11822: Install from pre-built binaries failing
mdtdev opened a new issue #11822: Install from pre-built binaries failing URL: https://github.com/apache/incubator-mxnet/issues/11822

Note: Providing complete information in the most concise form is the best way to get help. This issue template serves as the checklist for essential information to most of the technical issues and bug reports. For non-technical issues and feature requests, feel free to present the information in what you believe is the best form. For Q & A and discussion, please start a discussion thread at https://discuss.mxnet.io

## Description
Following the instructions at https://mxnet.apache.org/install/index.html?platform=MacOS&language=R&processor=CPU fails to install mxnet.

## Environment info (Required)
```
--Python Info--
Version : 3.6.3
Compiler : GCC 4.2.1 Compatible Clang 4.0.1 (tags/RELEASE_401/final)
Build: ('default', 'Oct 6 2017 12:04:38')
Arch : ('64bit', '')
Pip Info---
Version : 10.0.1
Directory: /Users/XX/anaconda3/lib/python3.6/site-packages/pip
--MXNet Info---
No MXNet installed.
--System Info--
Platform : Darwin-17.6.0-x86_64-i386-64bit
system : Darwin
node : Ninshubur.local
release : 17.6.0
version : Darwin Kernel Version 17.6.0: Tue May 8 15:22:16 PDT 2018; root:xnu-4570.61.1~1/RELEASE_X86_64
--Hardware Info--
machine : x86_64
processor: i386
b'machdep.cpu.brand_string: Intel(R) Core(TM) i5-4278U CPU @ 2.60GHz'
b'machdep.cpu.features: FPU VME DE PSE TSC MSR PAE MCE CX8 APIC SEP MTRR PGE MCA CMOV PAT PSE36 CLFSH DS ACPI MMX FXSR SSE SSE2 SS HTT TM PBE SSE3 PCLMULQDQ DTES64 MON DSCPL VMX EST TM2 SSSE3 FMA CX16 TPR PDCM SSE4.1 SSE4.2 x2APIC MOVBE POPCNT AES PCID XSAVE OSXSAVE SEGLIM64 TSCTMR AVX1.0 RDRAND F16C'
b'machdep.cpu.leaf7_features: SMEP ERMS RDWRFSGS TSC_THREAD_OFFSET BMI1 AVX2 BMI2 INVPCID FPU_CSDS'
b'machdep.cpu.extfeatures: SYSCALL XD 1GBPAGE EM64T LAHF LZCNT RDTSCP TSCI'
--Network Test--
Setting timeout: 10
Timing for MXNet: https://github.com/apache/incubator-mxnet, DNS: 0.0234 sec, LOAD: 0.4682 sec.
Timing for Gluon Tutorial(en): http://gluon.mxnet.io, DNS: 0.0465 sec, LOAD: 0.4947 sec.
Timing for Gluon Tutorial(cn): https://zh.gluon.ai, DNS: 0.0796 sec, LOAD: 0.5364 sec.
Timing for FashionMNIST: https://apache-mxnet.s3-accelerate.dualstack.amazonaws.com/gluon/dataset/fashion-mnist/train-labels-idx1-ubyte.gz, DNS: 0.0474 sec, LOAD: 0.6981 sec.
Timing for PYPI: https://pypi.python.org/pypi/pip, DNS: 0.0288 sec, LOAD: 0.6707 sec.
Timing for Conda: https://repo.continuum.io/pkgs/free/, DNS: 0.0281 sec, LOAD: 0.2254 sec.
```

Package used (Python/R/Scala/Julia): R. Session info follows...
```
> sessionInfo()
R version 3.5.1 (2018-07-02)
Platform: x86_64-apple-darwin15.6.0 (64-bit)
Running under: macOS High Sierra 10.13.5

Matrix products: default
BLAS: /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib
LAPACK: /Library/Frameworks/R.framework/Versions/3.5/Resources/lib/libRlapack.dylib

locale:
[1] en_US.UTF-8/en_US.UTF-8/en_US.UTF-8/C/en_US.UTF-8/en_US.UTF-8

attached base packages:
[1] stats graphics grDevices utils datasets methods
[7] base

other attached packages:
[1] topicmodels_0.2-7 quanteda_1.3.0

loaded via a namespace (and not attached):
 [1] Rcpp_0.12.17       lubridate_1.7.4    lattice_0.20-35
 [4] tidyr_0.8.1        visNetwork_2.0.4   assertthat_0.2.0
 [7] digest_0.6.15      slam_0.1-43        R6_2.2.2
[10] plyr_1.8.4         stats4_3.5.1       ggplot2_3.0.0
[13] pillar_1.3.0       rlang_0.2.1        lazyeval_0.2.1
[16] rstudioapi_0.7     data.table_1.11.4  Matrix_1.2-14
[19] DiagrammeR_1.0.0   downloader_0.4     readr_1.1.1
[22] stringr_1.3.1      htmlwidgets_1.2    igraph_1.2.1
[25] munsell_0.5.0      compiler_3.5.1     influenceR_0.1.0
[28] rgexf_0.15.3       spacyr_0.9.9       pkgconfig_2.0.1
[31] htmltools_0.3.6    tidyselect_0.2.4   tibble_1.4.2
[34] gridExtra_2.3      XML_3.98-1.12      viridisLite_0.3.0
[37] crayon_1.3.4       dplyr_0.7.6        grid_3.5.1
[40] jsonlite_1.5       gtable_0.2.0       magrittr_1.5
[43] scales_0.5.0       RcppParallel_4.4.0 stringi_1.2.3
[46] viridis_0.5.1      bindrcpp_0.2.2     NLP_0.1-11
[49] xml2_1.2.0         stopwords_0.9.0    brew_1.0-6
[52] fastmatch_1.1-0    RColorBrewer_1.1-2 tools_3.5.1
[55] stm_1.3.3          glue_1.2.0         purrr_0.2.5
[58] hms_0.4.2          wordVectors_2.0    Rook_1.1-1
[61] rsconnect_0.8.8    parallel_3.5.1     yaml_2.1.19
```
[GitHub] mdtdev commented on issue #11822: Install from pre-built binaries failing
mdtdev commented on issue #11822: Install from pre-built binaries failing URL: https://github.com/apache/incubator-mxnet/issues/11822#issuecomment-406310446 Let me add that this was discovered in a workshop, and problems showed up on both Windows boxes and Macs; the Windows boxes, however, load the libraries and then fail to work after that. I cannot provide that info, and most people at this workshop cannot use their command lines, much less use GitHub to provide info. Not sure if that is relevant.
[GitHub] nswamy commented on issue #11819: Is there a way to stop a Jenkins test run?
nswamy commented on issue #11819: Is there a way to stop a Jenkins test run? URL: https://github.com/apache/incubator-mxnet/issues/11819#issuecomment-406313512 @cclauss Thanks for bringing this to our attention. You are right: it's a waste of resources and also delays your new changes getting tested. On another note, we generally discourage contributors from creating WIP PRs; I hope that's not the case here. @marcoabreu shouldn't this already happen? Could you please prioritize? I have experienced this as well.
[GitHub] marcoabreu closed pull request #11579: Refactor armv8 builds
marcoabreu closed pull request #11579: Refactor armv8 builds URL: https://github.com/apache/incubator-mxnet/pull/11579

This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance. As this is a foreign pull request (from a fork), the diff is supplied below (as it won't show otherwise due to GitHub magic):

```diff
diff --git a/Jenkinsfile b/Jenkinsfile
index 1f65e7739cf..18de715cdee 100644
--- a/Jenkinsfile
+++ b/Jenkinsfile
@@ -489,9 +489,9 @@ try {
       }
     }
   },
-  'Raspberry / ARMv7': {
+  'ARMv7': {
     node('mxnetlinux-cpu') {
-      ws('workspace/build-raspberry-armv7') {
+      ws('workspace/build-ARMv7') {
         timeout(time: max_time, unit: 'MINUTES') {
           init_git()
           docker_run('armv7', 'build_armv7', false)
@@ -499,9 +499,9 @@ try {
       }
     }
   },
-  'Raspberry / ARMv6': {
+  'ARMv6': {
     node('mxnetlinux-cpu') {
-      ws('workspace/build-raspberry-armv6') {
+      ws('workspace/build-ARMv6') {
         timeout(time: max_time, unit: 'MINUTES') {
           init_git()
           docker_run('armv6', 'build_armv6', false)
@@ -509,12 +509,22 @@ try {
       }
     }
   },
-  'Android / ARM64': {
+  'ARMv8': {
+    node('mxnetlinux-cpu') {
+      ws('workspace/build-ARMv8') {
+        timeout(time: max_time, unit: 'MINUTES') {
+          init_git()
+          docker_run('armv8', 'build_armv8', false)
+        }
+      }
+    }
+  },
+  'Android / ARMv8': {
     node('mxnetlinux-cpu') {
       ws('workspace/android64') {
         timeout(time: max_time, unit: 'MINUTES') {
           init_git()
-          docker_run('android_arm64', 'build_android_arm64', false)
+          docker_run('android_armv8', 'build_android_armv8', false)
         }
       }
     }
diff --git a/ci/docker/Dockerfile.build.android_arm64 b/ci/docker/Dockerfile.build.android_armv8
similarity index 100%
rename from ci/docker/Dockerfile.build.android_arm64
rename to ci/docker/Dockerfile.build.android_armv8
diff --git a/ci/docker/Dockerfile.build.arm64 b/ci/docker/Dockerfile.build.armv8
similarity index 85%
rename from ci/docker/Dockerfile.build.arm64
rename to ci/docker/Dockerfile.build.armv8
index fd87bf0fa6c..458b62ee094 100755
--- a/ci/docker/Dockerfile.build.arm64
+++ b/ci/docker/Dockerfile.build.armv8
@@ -28,6 +28,10 @@ ENV TARGET ARMV8
 
 WORKDIR /work/deps
 
+# gh issue #11567 https://github.com/apache/incubator-mxnet/issues/11567
+RUN sed -i '\#deb http://cdn-fastly.deb.debian.org/debian-security jessie/updates main#d' /etc/apt/sources.list
+RUN sed -i 's/cdn-fastly.//' /etc/apt/sources.list
+
 COPY install/ubuntu_arm.sh /work/
 RUN /work/ubuntu_arm.sh
diff --git a/ci/docker/Dockerfile.build.jetson b/ci/docker/Dockerfile.build.jetson
index f3cf1112c34..cfb5a3fd4da 100755
--- a/ci/docker/Dockerfile.build.jetson
+++ b/ci/docker/Dockerfile.build.jetson
@@ -31,6 +31,11 @@ ENV ARCH aarch64
 ENV HOSTCC gcc
 ENV TARGET ARMV8
 
+# gh issue #11567 https://github.com/apache/incubator-mxnet/issues/11567
+RUN sed -i '\#deb http://cdn-fastly.deb.debian.org/debian-security jessie/updates main#d' /etc/apt/sources.list
+RUN sed -i 's/cdn-fastly.//' /etc/apt/sources.list
+
+
 WORKDIR /work/deps
 
 COPY install/ubuntu_arm.sh /work/
diff --git a/ci/docker/runtime_functions.sh b/ci/docker/runtime_functions.sh
index 566ff18204d..c899fe5c145 100755
--- a/ci/docker/runtime_functions.sh
+++ b/ci/docker/runtime_functions.sh
@@ -127,6 +127,10 @@ report_ccache_usage() {
     popd
 }
 
+#
+# ARM builds
+#
+
 build_armv6() {
     set -ex
     pushd .
@@ -192,25 +196,7 @@ build_armv7() {
     popd
 }
 
-build_amzn_linux_cpu() {
-    cd /work/build
-    cmake \
-        -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-        -DCMAKE_C_COMPILER_LAUNCHER=ccache \
-        -DUSE_CUDA=OFF \
-        -DUSE_OPENCV=ON \
-        -DUSE_OPENMP=ON \
-        -DUSE_SIGNAL_HANDLER=ON \
-        -DCMAKE_BUILD_TYPE=RelWithDebInfo \
-        -DUSE_MKL_IF_AVAILABLE=OFF \
-        -DUSE_LAPACK=OFF \
-        -DUSE_DIST_KVSTORE=ON \
-        -G Ninja /work/mxnet
-    ninja -v
-    report_ccache_usage
-}
-
-build_arm64() {
+build_armv8() {
     cmake \
         -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
         -DCMAKE_C_COMPILER_LAUNCHER=ccache \
@@ -218,6 +204,7 @@ build_arm64() {
         -DSUPPORT_F16C=OFF \
         -DUSE_OPENCV=OFF \
         -DUSE_OPENMP=OFF \
+        -DUSE_LAPACK=OFF \
         -DUSE_SIGNAL_HANDLER=ON \
         -DCMAKE_BUILD_TYPE=Release \
         -DUSE_MKL_IF_AVAILABLE=OFF \
@@ -227,12 +214,14 @@ build_arm64() {
     build_wheel
 }
 
+
+#
+# ANDROID builds
+#
+
 build_android_armv7() {
     set -ex
     cd /work/build
-    #-DCMAKE_SYSTEM_NAME=Android \
-    #-DCMAKE_ANDROID_NDK=${CROSS_ROOT} \
-    #-DCMAKE_SYSTEM_VERSION=21 \
     cmake \
         -DANDROID=ON \
         -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
@@ -251,15 +240,9 @@ build_android_armv7() {
```
[incubator-mxnet] branch master updated: Refactor armv8 builds and add to CI (#11579)
This is an automated email from the ASF dual-hosted git repository. marcoabreu pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new bbcf083 Refactor armv8 builds and add to CI (#11579) bbcf083 is described below commit bbcf0838badfb1c3fc7488e6a00cf4013b52fda7 Author: Pedro Larroy <928489+lar...@users.noreply.github.com> AuthorDate: Thu Jul 19 17:18:04 2018 +0200 Refactor armv8 builds and add to CI (#11579) --- Jenkinsfile| 22 +--- ...ndroid_arm64 => Dockerfile.build.android_armv8} | 0 ...ckerfile.build.arm64 => Dockerfile.build.armv8} | 4 ++ ci/docker/Dockerfile.build.jetson | 5 ++ ci/docker/runtime_functions.sh | 62 +++--- 5 files changed, 57 insertions(+), 36 deletions(-) diff --git a/Jenkinsfile b/Jenkinsfile index 49b7ca9..e81fb50 100644 --- a/Jenkinsfile +++ b/Jenkinsfile @@ -489,9 +489,9 @@ try { } } }, -'Raspberry / ARMv7':{ +'ARMv7':{ node('mxnetlinux-cpu') { -ws('workspace/build-raspberry-armv7') { +ws('workspace/build-ARMv7') { timeout(time: max_time, unit: 'MINUTES') { init_git() docker_run('armv7', 'build_armv7', false) @@ -499,9 +499,9 @@ try { } } }, -'Raspberry / ARMv6':{ +'ARMv6':{ node('mxnetlinux-cpu') { -ws('workspace/build-raspberry-armv6') { +ws('workspace/build-ARMv6') { timeout(time: max_time, unit: 'MINUTES') { init_git() docker_run('armv6', 'build_armv6', false) @@ -509,12 +509,22 @@ try { } } }, -'Android / ARM64':{ +'ARMv8':{ + node('mxnetlinux-cpu') { +ws('workspace/build-ARMv8') { + timeout(time: max_time, unit: 'MINUTES') { +init_git() +docker_run('armv8', 'build_armv8', false) + } +} + } +}, +'Android / ARMv8':{ node('mxnetlinux-cpu') { ws('workspace/android64') { timeout(time: max_time, unit: 'MINUTES') { init_git() -docker_run('android_arm64', 'build_android_arm64', false) +docker_run('android_armv8', 'build_android_armv8', false) } } } diff --git a/ci/docker/Dockerfile.build.android_arm64 
b/ci/docker/Dockerfile.build.android_armv8 similarity index 100% rename from ci/docker/Dockerfile.build.android_arm64 rename to ci/docker/Dockerfile.build.android_armv8 diff --git a/ci/docker/Dockerfile.build.arm64 b/ci/docker/Dockerfile.build.armv8 similarity index 85% rename from ci/docker/Dockerfile.build.arm64 rename to ci/docker/Dockerfile.build.armv8 index fd87bf0..458b62e 100755 --- a/ci/docker/Dockerfile.build.arm64 +++ b/ci/docker/Dockerfile.build.armv8 @@ -28,6 +28,10 @@ ENV TARGET ARMV8 WORKDIR /work/deps +# gh issue #11567 https://github.com/apache/incubator-mxnet/issues/11567 +RUN sed -i '\#deb http://cdn-fastly.deb.debian.org/debian-security jessie/updates main#d' /etc/apt/sources.list +RUN sed -i 's/cdn-fastly.//' /etc/apt/sources.list + COPY install/ubuntu_arm.sh /work/ RUN /work/ubuntu_arm.sh diff --git a/ci/docker/Dockerfile.build.jetson b/ci/docker/Dockerfile.build.jetson index f3cf111..cfb5a3f 100755 --- a/ci/docker/Dockerfile.build.jetson +++ b/ci/docker/Dockerfile.build.jetson @@ -31,6 +31,11 @@ ENV ARCH aarch64 ENV HOSTCC gcc ENV TARGET ARMV8 +# gh issue #11567 https://github.com/apache/incubator-mxnet/issues/11567 +RUN sed -i '\#deb http://cdn-fastly.deb.debian.org/debian-security jessie/updates main#d' /etc/apt/sources.list +RUN sed -i 's/cdn-fastly.//' /etc/apt/sources.list + + WORKDIR /work/deps COPY install/ubuntu_arm.sh /work/ diff --git a/ci/docker/runtime_functions.sh b/ci/docker/runtime_functions.sh index 566ff18..c899fe5 100755 --- a/ci/docker/runtime_functions.sh +++ b/ci/docker/runtime_functions.sh @@ -127,6 +127,10 @@ report_ccache_usage() { popd } +# +# ARM builds +# + build_armv6() { set -ex pushd . 
@@ -192,25 +196,7 @@ build_armv7() { popd } -build_amzn_linux_cpu() { -cd /work/build -cmake \ --DCMAKE_CXX_COMPILER_LAUNCHER=ccache \ --DCMAKE_C_COMPILER_LAUNCHER=ccache \ --DUSE_CUDA=OFF\ --DUSE_OPENCV=ON\ --DUSE_OPENMP=ON\ --DUSE_SIGNAL_HANDLER=ON\ --DCMAKE_BUILD_TYPE=RelWithDebInfo\ --DUSE_MKL_IF_AVAILABLE=OFF\ --DUSE_LAPACK=OFF\ --DUSE_DIST_KVSTORE=ON\ --G Ninja /work/mxnet -ninja -v -report_ccache_usage -} - -build_arm64() { +build_armv8() { cmake \ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \ -DCMAKE_C_COMPILER_LAUNCHER=ccache \ @@ -218,6 +204,7 @@ build_arm64() { -DSUPPORT_F16C=OFF\ -DUSE_OPENCV=OFF\ -DUSE_OPENMP=OFF\ +-DUSE_LAPACK=OFF\
[GitHub] nswamy commented on issue #11817: got 'Process finished with exit code 134 (interrupted by signal 6: SIGABRT)' when binding quantized module
nswamy commented on issue #11817: got 'Process finished with exit code 134 (interrupted by signal 6: SIGABRT)' when binding quantized module URL: https://github.com/apache/incubator-mxnet/issues/11817#issuecomment-406315591 @reminisce could you please help? Please feel free to change the labels if it isn't a bug. This is an automated message from the Apache Git Service. To respond to the message, please log on to GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] nswamy commented on a change in pull request #11466: [MXNET-560] Add temperature parameter in Softmax operator
nswamy commented on a change in pull request #11466: [MXNET-560] Add temperature parameter in Softmax operator URL: https://github.com/apache/incubator-mxnet/pull/11466#discussion_r203776018 ## File path: src/operator/nn/softmax-inl.h ## @@ -71,12 +71,22 @@ inline void Softmax(Stream *s, DType *in, DType *out, } DType sum = DType(0); -for (index_t j = 0; j < M; ++j) { - sum += std::exp(in[base + j*sa] - mmax); -} +if (temperature == 1.0) { Review comment: could you add a comment explaining why you have a separate if for 1.0?
[GitHub] marcoabreu commented on issue #11813: Fix broken website build pipeline
marcoabreu commented on issue #11813: Fix broken website build pipeline URL: https://github.com/apache/incubator-mxnet/pull/11813#issuecomment-406323078 I have changed the name
[GitHub] vandanavk commented on issue #11526: Bug in mxnet.contrib.text.utils.count_tokens_from_str
vandanavk commented on issue #11526: Bug in mxnet.contrib.text.utils.count_tokens_from_str URL: https://github.com/apache/incubator-mxnet/issues/11526#issuecomment-406323891 The documentation update (https://github.com/apache/incubator-mxnet/pull/11800) has been merged. Can this be closed? @Neutron3529 @sandeep-krishnamurthy
[GitHub] apeforest commented on a change in pull request #11466: [MXNET-560] Add temperature parameter in Softmax operator
apeforest commented on a change in pull request #11466: [MXNET-560] Add temperature parameter in Softmax operator URL: https://github.com/apache/incubator-mxnet/pull/11466#discussion_r203778887 ## File path: src/operator/nn/softmax-inl.h ## @@ -71,12 +71,22 @@ inline void Softmax(Stream *s, DType *in, DType *out, } DType sum = DType(0); -for (index_t j = 0; j < M; ++j) { - sum += std::exp(in[base + j*sa] - mmax); -} +if (temperature == 1.0) { Review comment: By default the value of temperature is 1.0, and users will use other values only in reinforcement learning use cases. For CPU, the compiler cannot optimize away this "divide-by-1.0" computation at runtime, so I added a branch here. The performance difference was measured using the example shown in the Description of this PR. This branch is not added in the GPU kernel because branching would add extra overhead on GPU.
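For readers following the thread, the fast path under discussion can be sketched outside the C++ kernel. The following is an illustrative NumPy version, not the MXNet implementation; the function name and structure are my own:

```python
import numpy as np

def softmax_with_temperature(x, temperature=1.0):
    """Row-wise softmax with a temperature parameter.

    Sketch of the CPU fast path discussed above: when temperature == 1.0
    (the default), the division is skipped entirely instead of relying on
    the compiler to optimize away a divide-by-1.0 at runtime.
    """
    # Subtract the row max first for numerical stability, as the kernel does.
    shifted = x - x.max(axis=-1, keepdims=True)
    if temperature == 1.0:
        e = np.exp(shifted)                # fast path: no per-element division
    else:
        e = np.exp(shifted / temperature)  # general path
    return e / e.sum(axis=-1, keepdims=True)
```

With this structure, the default call performs no per-element division at all, while a higher temperature (e.g. `temperature=10.0`) visibly flattens the output distribution, which is why non-default values show up in reinforcement learning.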
[GitHub] nswamy commented on a change in pull request #11466: [MXNET-560] Add temperature parameter in Softmax operator
nswamy commented on a change in pull request #11466: [MXNET-560] Add temperature parameter in Softmax operator URL: https://github.com/apache/incubator-mxnet/pull/11466#discussion_r203776018 ## File path: src/operator/nn/softmax-inl.h ## @@ -71,12 +71,22 @@ inline void Softmax(Stream *s, DType *in, DType *out, } DType sum = DType(0); -for (index_t j = 0; j < M; ++j) { - sum += std::exp(in[base + j*sa] - mmax); -} +if (temperature == 1.0) { Review comment: could you add a comment explaining why you are branching for 1.0, and also noting that this is not useful for GPU. Generally it's a good practice to add comments around non-obvious code that needs special handling, or code that you might have discovered after scratching your head :)
[GitHub] junrushao1994 opened a new pull request #11823: Fix flaky test for while_loop
junrushao1994 opened a new pull request #11823: Fix flaky test for while_loop URL: https://github.com/apache/incubator-mxnet/pull/11823 ## Description ## In the unittest for the newly-added while_loop, we perform multiplication over many steps, which causes numeric overflow. This PR fixes this by simply reducing the number of steps taken. ## Checklist ## ### Essentials ### Please feel free to remove inapplicable items for your PR. - [x] Changes are complete (i.e. I finished coding on this PR) - [x] All changes have test coverage: - Unit tests are added for small changes to verify correctness (e.g. adding a new operator) - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore) - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL) - [x] Code is well-documented: - For user-facing API changes, the API doc string has been updated. - For new C++ functions in header files, their functionalities and arguments are documented. - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set and a reference to the original paper if applicable - Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html - [x] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change ### Changes ### - [x] Changes happen only in ./tests/python/unittest/test_contrib_control_flow.py ## Comments ## - No comments
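The overflow being fixed here is easy to reproduce outside MXNet. The snippet below is an illustration of the failure mode, not the actual test code; it shows why the loop-condition threshold was lowered from 1e200 to 1e35 for float32 data:

```python
import numpy as np

# float32 tops out near 3.4e38, so the old loop threshold of 1e200 was
# never representable in float32, while the new threshold of 1e35 is.
fmax = np.finfo(np.float32).max
assert 1e35 < fmax < 1e200

# Repeated multiplication, as in the while_loop test, overflows to inf
# after a few dozen steps -- far short of ever reaching 1e200.
x = np.float32(2.0)
steps = 0
while np.isfinite(x):
    x = np.float32(x * 10.0)
    steps += 1
```

Once the running value becomes inf, a condition like `loop_vars[0] < 1e200` can never terminate the loop the way the test intended, which is the flakiness the PR removes by shortening the loops.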
[GitHub] apeforest commented on a change in pull request #11466: [MXNET-560] Add temperature parameter in Softmax operator
apeforest commented on a change in pull request #11466: [MXNET-560] Add temperature parameter in Softmax operator URL: https://github.com/apache/incubator-mxnet/pull/11466#discussion_r203784722 ## File path: src/operator/nn/softmax-inl.h ## @@ -71,12 +71,22 @@ inline void Softmax(Stream *s, DType *in, DType *out, } DType sum = DType(0); -for (index_t j = 0; j < M; ++j) { - sum += std::exp(in[base + j*sa] - mmax); -} +if (temperature == 1.0) { Review comment: Thanks for your suggestion. I have added two comments in the code to make it clear for other developers in the future.
[GitHub] nswamy commented on a change in pull request #11466: [MXNET-560] Add temperature parameter in Softmax operator
nswamy commented on a change in pull request #11466: [MXNET-560] Add temperature parameter in Softmax operator URL: https://github.com/apache/incubator-mxnet/pull/11466#discussion_r203784706 ## File path: src/operator/nn/softmax-inl.h ## @@ -71,12 +71,22 @@ inline void Softmax(Stream *s, DType *in, DType *out, } DType sum = DType(0); -for (index_t j = 0; j < M; ++j) { - sum += std::exp(in[base + j*sa] - mmax); -} +if (temperature == 1.0) { Review comment: I meant in the code :)
[GitHub] szha commented on a change in pull request #11818: Add trove classifiers for PyPI page
szha commented on a change in pull request #11818: Add trove classifiers for PyPI page URL: https://github.com/apache/incubator-mxnet/pull/11818#discussion_r203789656 ## File path: python/setup.py ## @@ -106,4 +106,20 @@ def config_cython(): data_files=[('mxnet', [LIB_PATH[0]])], url='https://github.com/apache/incubator-mxnet', ext_modules=config_cython(), + classifiers=[ + # https://pypi.org/pypi?%3Aaction=list_classifiers + 'Development Status :: 5 - Production/Stable', + 'License :: OSI Approved :: Apache Software License', + 'Programming Language :: C++', + 'Programming Language :: Cython', + 'Programming Language :: Other', # R, Scala + 'Programming Language :: Perl', + 'Programming Language :: Python', + 'Programming Language :: Python :: 2.7', + 'Programming Language :: Python :: 3.4', + 'Programming Language :: Python :: 3.5', + 'Programming Language :: Python :: 3.6', + 'Programming Language :: Python :: 3.7', Review comment: This hasn't been tested. Items to finish before claiming 3.7 support can be found in https://github.com/apache/incubator-mxnet/projects/12
[GitHub] anirudh2290 commented on issue #11630: Fix flaky test test_deconvolution
anirudh2290 commented on issue #11630: Fix flaky test test_deconvolution URL: https://github.com/apache/incubator-mxnet/pull/11630#issuecomment-406334980 @asmushetzel sorry about the confusion. To clarify, the documentation states that both APIs should behave exactly the same when all tensors have float32 dtype. I am guessing that the difference in behavior could then be because cublasSgemm is converting tensors to float16 dtype (a bug?) for computation, which could be the reason for the precision issues. I have asked the Nvidia folks for input but haven't heard back from them yet.
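The float16 hypothesis can be sanity-checked in isolation. The snippet below is illustrative only and has nothing to do with cuBLAS itself; it simply shows the precision gap that a silent float32-to-float16 downcast would introduce:

```python
import numpy as np

# At magnitude 1024 the spacing between adjacent float16 values is 1.0,
# so adding 0.4 is lost entirely, while float32 still resolves it. A GEMM
# that silently downcast float32 inputs to float16 would exhibit exactly
# this kind of precision loss in its accumulated results.
a16 = np.float16(1024.0) + np.float16(0.4)  # rounds back to 1024.0
a32 = np.float32(1024.0) + np.float32(0.4)  # keeps the 0.4
```

This is consistent with a deconvolution test that only fails intermittently: whether the lost low-order bits matter depends on the random input values drawn in each run.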
[GitHub] szha commented on a change in pull request #11818: Add trove classifiers for PyPI page
szha commented on a change in pull request #11818: Add trove classifiers for PyPI page URL: https://github.com/apache/incubator-mxnet/pull/11818#discussion_r203790966 ## File path: python/setup.py ## @@ -106,4 +106,20 @@ def config_cython(): data_files=[('mxnet', [LIB_PATH[0]])], url='https://github.com/apache/incubator-mxnet', ext_modules=config_cython(), + classifiers=[ + # https://pypi.org/pypi?%3Aaction=list_classifiers + 'Development Status :: 5 - Production/Stable', + 'License :: OSI Approved :: Apache Software License', + 'Programming Language :: C++', + 'Programming Language :: Cython', + 'Programming Language :: Other', # R, Scala + 'Programming Language :: Perl', + 'Programming Language :: Python', + 'Programming Language :: Python :: 2.7', + 'Programming Language :: Python :: 3.4', + 'Programming Language :: Python :: 3.5', + 'Programming Language :: Python :: 3.6', + 'Programming Language :: Python :: 3.7', Review comment: Also, maybe add some other classifiers too? Examples from our friend tensorflow: https://pypi.org/project/tensorflow/
[GitHub] marcoabreu commented on a change in pull request #11814: [MXAPPS-581] Nightly Straight Dope tests.
marcoabreu commented on a change in pull request #11814: [MXAPPS-581] Nightly Straight Dope tests. URL: https://github.com/apache/incubator-mxnet/pull/11814#discussion_r203793830 ## File path: ci/docker/runtime_functions.sh ## @@ -880,6 +880,41 @@ nightly_test_javascript() { make -C /work/mxnet/amalgamation libmxnet_predict.js MIN=1 EMCC=/work/deps/emscripten/emcc } +# Nightly 'MXNet: The Straight Dope' Single-GPU Tests +set_up_nightly_straight_dope_tests() { +set -ex +cd /work/mxnet/tests/nightly/straight_dope +rm -rf ./straight_dope_book +git clone https://github.com/zackchase/mxnet-the-straight-dope straight_dope_book Review comment: Would it be possible to do this as part of the test class init? That way, the test would be self contained and doesn't need any bootstrapping.
[incubator-mxnet] branch master updated: Fix broken website build pipeline (#11813)
This is an automated email from the ASF dual-hosted git repository. marcoabreu pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new ee40f7c Fix broken website build pipeline (#11813) ee40f7c is described below commit ee40f7c9290ee7a43c78365008fdcfd85e5607f3 Author: Sheng Zha AuthorDate: Thu Jul 19 09:40:51 2018 -0700 Fix broken website build pipeline (#11813) --- docs/Jenkinsfile | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/Jenkinsfile b/docs/Jenkinsfile index 175f637..ef0755f 100644 --- a/docs/Jenkinsfile +++ b/docs/Jenkinsfile @@ -51,7 +51,7 @@ try { ws('workspace/docs') { init_git() timeout(time: max_time, unit: 'MINUTES') { -sh "ci/build.py -p ubuntu_cpu --docker-registry ${env.DOCKER_CACHE_REGISTRY} %USE_NVIDIA% --docker-build-retries 3 /work/runtime_functions.sh build_docs ${params.tags_to_build} ${params.tag_list} ${params.tag_default} ${params.domain}" +sh "ci/build.py -p ubuntu_cpu --docker-registry ${env.DOCKER_CACHE_REGISTRY} --docker-build-retries 3 /work/runtime_functions.sh build_docs ${params.tags_to_build} ${params.tag_list} ${params.tag_default} ${params.domain}" archiveArtifacts 'docs/build_version_doc/artifacts.tgz' build 'restricted-website-publish' }
[GitHub] marcoabreu closed pull request #11813: Fix broken website build pipeline
marcoabreu closed pull request #11813: Fix broken website build pipeline URL: https://github.com/apache/incubator-mxnet/pull/11813
[GitHub] azai91 commented on issue #11778: [MXNET-483] C++ tests for mkldnn convolution operator
azai91 commented on issue #11778: [MXNET-483] C++ tests for mkldnn convolution operator URL: https://github.com/apache/incubator-mxnet/pull/11778#issuecomment-406340818 @zheng-da can you review
[GitHub] szha commented on issue #11810: website build and deploy failing
szha commented on issue #11810: website build and deploy failing URL: https://github.com/apache/incubator-mxnet/issues/11810#issuecomment-406342358 Fix has been merged. I triggered a build with updated parameters at http://jenkins.mxnet-ci.amazon-ml.com/job/restricted-website-build/123/
[GitHub] eric-haibin-lin commented on a change in pull request #11643: Added the diag() operator
eric-haibin-lin commented on a change in pull request #11643: Added the diag() operator URL: https://github.com/apache/incubator-mxnet/pull/11643#discussion_r203798464 ## File path: docs/api/python/ndarray/ndarray.md ## @@ -131,6 +131,7 @@ The `ndarray` package provides several classes: NDArray.flatten NDArray.expand_dims NDArray.split +NDArray.diag ``` Review comment: Sorry, I didn't make it clear: there are two places to add per file. For `ndarray.md`, one is NDArray.diag (the fluent method) and the other is (ndarray.)diag at line 360.
[GitHub] ankkhedia commented on issue #11822: Install from pre-built binaries failing
ankkhedia commented on issue #11822: Install from pre-built binaries failing URL: https://github.com/apache/incubator-mxnet/issues/11822#issuecomment-406346186 Hi @mdtdev, could you please provide a minimal reproducible example that fails on Windows? @anirudhacharya, could you please help look into the Mac issue?
[GitHub] KellenSunderland opened a new pull request #11824: WIP - Tensorrt integration 9
KellenSunderland opened a new pull request #11824: WIP - Tensorrt integration 9 URL: https://github.com/apache/incubator-mxnet/pull/11824 WIP - Tensorrt integration 9
[GitHub] ifeherva commented on a change in pull request #11643: Added the diag() operator
ifeherva commented on a change in pull request #11643: Added the diag() operator URL: https://github.com/apache/incubator-mxnet/pull/11643#discussion_r203805894 ## File path: docs/api/python/ndarray/ndarray.md ## @@ -131,6 +131,7 @@ The `ndarray` package provides several classes: NDArray.flatten NDArray.expand_dims NDArray.split +NDArray.diag ``` Review comment: Done
[GitHub] eric-haibin-lin commented on issue #11811: Duplication of Operators for sampling from random distributions
eric-haibin-lin commented on issue #11811: Duplication of Operators for sampling from random distributions URL: https://github.com/apache/incubator-mxnet/issues/11811#issuecomment-406352505 I think the recommended APIs are listed in https://mxnet.incubator.apache.org/versions/master/api/python/ndarray/random.html#random-distribution-generator Why are you looking into this? Are you trying to add a new one?
[GitHub] junrushao1994 commented on issue #11823: Fix flaky test for while_loop
junrushao1994 commented on issue #11823: Fix flaky test for while_loop URL: https://github.com/apache/incubator-mxnet/pull/11823#issuecomment-406353736 @szha @zheng-da Could you help review this PR? Thanks!
[GitHub] szha closed pull request #11823: Fix flaky test for while_loop
szha closed pull request #11823: Fix flaky test for while_loop URL: https://github.com/apache/incubator-mxnet/pull/11823 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance: As this is a foreign pull request (from a fork), the diff is supplied below (as it won't show otherwise due to GitHub magic): diff --git a/tests/python/unittest/test_contrib_control_flow.py b/tests/python/unittest/test_contrib_control_flow.py index 9dd5c4397be..1cc5b21ac86 100644 --- a/tests/python/unittest/test_contrib_control_flow.py +++ b/tests/python/unittest/test_contrib_control_flow.py @@ -15,17 +15,16 @@ # specific language governing permissions and limitations # under the License. +import numpy as np import mxnet as mx from mxnet import gluon -import numpy as np -import copy -from numpy.testing import assert_allclose -import unittest -from mxnet.test_utils import almost_equal, default_context -from numpy.testing import assert_allclose as assert_almost_equal # This is more restrictive +from numpy.testing import assert_allclose, assert_array_equal +from mxnet.test_utils import * from mxnet.base import _as_list +from common import with_seed +@with_seed() def test_while_loop_simple_forward(): class _TestBlock(gluon.HybridBlock): @@ -244,13 +243,14 @@ def _zeros_like_dict(name_list): assert_almost_equal(imp_grad, sym_grad, rtol=1e-4, atol=1e-4) +@with_seed() def test_while_loop_for_foreach(): def make_true_cond(): -return lambda loop_vars, _: (loop_vars[0] < 1e200).prod() +return lambda loop_vars, _: (loop_vars[0] < 1e35).prod() def make_false_cond(): -return lambda loop_vars, _: (loop_vars[0] > 1e200).prod() +return lambda loop_vars, _: (loop_vars[0] > 1e35).prod() def make_for_cond(length): return lambda loop_vars, _: loop_vars[0] < length @@ -613,8 +613,8 @@ def step(loop, free): (1, ), # a (1, ), # b ], -max_iterations=23, -n_steps=23, +max_iterations=5, +n_steps=5, ) # Case 1.2.* case_1( @@ -626,8 
+626,8 @@ def step(loop, free): (2, 3, 4), # a (2, 3, 4), # b ], -max_iterations=31, -n_steps=31, +max_iterations=3, +n_steps=3, ) # Case 1.3.* case_1( @@ -644,7 +644,7 @@ def step(loop, free): ) # Case 2.1.* case_2( -cond=make_for_cond(length=31), +cond=make_for_cond(length=5), loop_var_shapes=[ (1, ), # i (2, ), # s @@ -654,11 +654,11 @@ def step(loop, free): (2, ), # f_1 (3, 4, 5, 6), # f_2, unused ], -n_steps=31, +n_steps=5, ) # Case 2.2.* case_2( -cond=make_for_cond(length=25), +cond=make_for_cond(length=3), loop_var_shapes=[ (1, ), # i (2, ), # s @@ -668,12 +668,12 @@ def step(loop, free): (2, ), # f_1 (3, 4, 5, 6), # f_2, unused ], -n_steps=25, +n_steps=3, ) # Case 3.* case_3( -length=11, -cond=make_for_cond(length=11), +length=5, +cond=make_for_cond(length=5), loop_var_shapes=[ (1, ), # i (2, ), # s_0 @@ -685,7 +685,7 @@ def step(loop, free): (2, ), # f_0 (3, 4, 5, 6), # f_1, unused ], -n_steps=11, +n_steps=5, ) # Case 4.1.* case_4( @@ -784,6 +784,7 @@ def step(loop, free): ) +@with_seed() def test_while_loop_nested(): def _to_np_list(arrays): @@ -891,6 +892,7 @@ def _get_sym_result(is_train, args, args_grad, out_grad): assert_almost_equal(x, y, rtol=1e-4, atol=1e-4) +@with_seed() def test_while_loop_rnn(): def _array(shape): return mx.nd.random.uniform(-1.0, 1.0, shape=shape)
[GitHub] haojin2 opened a new pull request #11825: Fix undeterminism of dot(csr.T, dns) = dns with tests
haojin2 opened a new pull request #11825: Fix undeterminism of dot(csr.T, dns) = dns with tests URL: https://github.com/apache/incubator-mxnet/pull/11825 ## Description ## Fix for #10709 ## Checklist ## ### Essentials ### - [x] Changes are complete (i.e. I finished coding on this PR) - [x] All changes have test coverage: - Unit tests are added for small changes to verify correctness (e.g. adding a new operator) - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore) - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL) - [x] Code is well-documented: - For user-facing API changes, the API doc string has been updated. - For new C++ functions in header files, their functionalities and arguments are documented. - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set and a reference to the original paper if applicable - Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html - [x] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change ### Changes ### - [x] Change dot(csr.T, dns) = dns to a deterministic algorithm - [x] Test for determinism of dot(csr.T, dns) = dns ## Comments ## The new determinism test passed 1 time on a local machine. The correctness check, which takes longer to run (~5 s/trial), has already passed more than 100 times at this moment.
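For context on the non-determinism this PR removes: floating-point addition is not associative, so a parallel reduction whose grouping of partial sums depends on thread scheduling can differ between runs. A minimal demonstration in plain Python/NumPy (not the MXNet kernel):

```python
import numpy as np

# Floating-point addition is not associative:
a = (0.1 + 0.2) + 0.3   # 0.6000000000000001
b = 0.1 + (0.2 + 0.3)   # 0.6
# so a reduction that groups its partial sums differently from run to run
# (e.g. per-thread accumulation with atomic adds) is non-deterministic in
# the low-order bits. Summing the same float32 data in two fixed but
# different orders shows the same effect on a larger scale:
vals = np.random.RandomState(0).uniform(-1.0, 1.0, 10000).astype(np.float32)
fwd = np.float32(0.0)
for v in vals:
    fwd = np.float32(fwd + v)
rev = np.float32(0.0)
for v in vals[::-1]:
    rev = np.float32(rev + v)
```

A deterministic algorithm fixes the accumulation order regardless of how work is split across threads, which is what makes the new determinism test meaningful.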
[incubator-mxnet] branch master updated: Initial commit (#11823)
This is an automated email from the ASF dual-hosted git repository. zhasheng pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new d86f954 Initial commit (#11823) d86f954 is described below commit d86f95498c74988de860ca93ca503ab68fe1f387 Author: Junru Shao AuthorDate: Thu Jul 19 10:27:20 2018 -0700 Initial commit (#11823) --- tests/python/unittest/test_contrib_control_flow.py | 40 -- 1 file changed, 21 insertions(+), 19 deletions(-) diff --git a/tests/python/unittest/test_contrib_control_flow.py b/tests/python/unittest/test_contrib_control_flow.py index 9dd5c43..1cc5b21 100644 --- a/tests/python/unittest/test_contrib_control_flow.py +++ b/tests/python/unittest/test_contrib_control_flow.py @@ -15,17 +15,16 @@ # specific language governing permissions and limitations # under the License. +import numpy as np import mxnet as mx from mxnet import gluon -import numpy as np -import copy -from numpy.testing import assert_allclose -import unittest -from mxnet.test_utils import almost_equal, default_context -from numpy.testing import assert_allclose as assert_almost_equal # This is more restrictive +from numpy.testing import assert_allclose, assert_array_equal +from mxnet.test_utils import * from mxnet.base import _as_list +from common import with_seed +@with_seed() def test_while_loop_simple_forward(): class _TestBlock(gluon.HybridBlock): @@ -244,13 +243,14 @@ def _verify_while_loop(cond, func, loop_var_shapes, free_var_shapes, is_train, m assert_almost_equal(imp_grad, sym_grad, rtol=1e-4, atol=1e-4) +@with_seed() def test_while_loop_for_foreach(): def make_true_cond(): -return lambda loop_vars, _: (loop_vars[0] < 1e200).prod() +return lambda loop_vars, _: (loop_vars[0] < 1e35).prod() def make_false_cond(): -return lambda loop_vars, _: (loop_vars[0] > 1e200).prod() +return lambda loop_vars, _: (loop_vars[0] > 1e35).prod() def make_for_cond(length): 
return lambda loop_vars, _: loop_vars[0] < length @@ -613,8 +613,8 @@ def test_while_loop_for_foreach(): (1, ), # a (1, ), # b ], -max_iterations=23, -n_steps=23, +max_iterations=5, +n_steps=5, ) # Case 1.2.* case_1( @@ -626,8 +626,8 @@ def test_while_loop_for_foreach(): (2, 3, 4), # a (2, 3, 4), # b ], -max_iterations=31, -n_steps=31, +max_iterations=3, +n_steps=3, ) # Case 1.3.* case_1( @@ -644,7 +644,7 @@ def test_while_loop_for_foreach(): ) # Case 2.1.* case_2( -cond=make_for_cond(length=31), +cond=make_for_cond(length=5), loop_var_shapes=[ (1, ), # i (2, ), # s @@ -654,11 +654,11 @@ def test_while_loop_for_foreach(): (2, ), # f_1 (3, 4, 5, 6), # f_2, unused ], -n_steps=31, +n_steps=5, ) # Case 2.2.* case_2( -cond=make_for_cond(length=25), +cond=make_for_cond(length=3), loop_var_shapes=[ (1, ), # i (2, ), # s @@ -668,12 +668,12 @@ def test_while_loop_for_foreach(): (2, ), # f_1 (3, 4, 5, 6), # f_2, unused ], -n_steps=25, +n_steps=3, ) # Case 3.* case_3( -length=11, -cond=make_for_cond(length=11), +length=5, +cond=make_for_cond(length=5), loop_var_shapes=[ (1, ), # i (2, ), # s_0 @@ -685,7 +685,7 @@ def test_while_loop_for_foreach(): (2, ), # f_0 (3, 4, 5, 6), # f_1, unused ], -n_steps=11, +n_steps=5, ) # Case 4.1.* case_4( @@ -784,6 +784,7 @@ def test_while_loop_for_foreach(): ) +@with_seed() def test_while_loop_nested(): def _to_np_list(arrays): @@ -891,6 +892,7 @@ def test_while_loop_nested(): assert_almost_equal(x, y, rtol=1e-4, atol=1e-4) +@with_seed() def test_while_loop_rnn(): def _array(shape): return mx.nd.random.uniform(-1.0, 1.0, shape=shape)
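For readers unfamiliar with the operator these tests exercise: `while_loop` takes a condition, a step function, initial loop variables, and a `max_iterations` cap. Below is a minimal pure-Python sketch of that contract — a hypothetical helper for illustration only, not MXNet's implementation (the real operator also handles symbolic graphs, shapes, and gradients, and its `cond`/`func` take an extra argument):

```python
def while_loop(cond, func, loop_vars, max_iterations):
    """Run func on loop_vars while cond holds, up to max_iterations steps.

    func returns (step_output, new_loop_vars); outputs are collected per step.
    """
    outputs = []
    steps = 0
    while steps < max_iterations and cond(loop_vars):
        step_output, loop_vars = func(loop_vars)
        outputs.append(step_output)
        steps += 1
    return outputs, loop_vars

# Example: running sum of 1..5; loop variables are (i, s).
outs, (i, s) = while_loop(
    cond=lambda lv: lv[0] <= 5,
    func=lambda lv: (lv[1] + lv[0], (lv[0] + 1, lv[1] + lv[0])),
    loop_vars=(1, 0),
    max_iterations=100,
)
print(s)  # 15
```

The test changes above only shrink `max_iterations`/`n_steps`; the contract itself is unchanged.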
[GitHub] haojin2 commented on issue #10709: [SPARSE] undeterministic result of sparse dot(csr, dense, transpose_a=True)
haojin2 commented on issue #10709: [SPARSE] undeterministic result of sparse dot(csr, dense, transpose_a=True) URL: https://github.com/apache/incubator-mxnet/issues/10709#issuecomment-406354148 @eric-haibin-lin fix in #11825 This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] haojin2 commented on issue #11825: Fix undeterminism of dot(csr.T, dns) = dns with tests
haojin2 commented on issue #11825: Fix undeterminism of dot(csr.T, dns) = dns with tests URL: https://github.com/apache/incubator-mxnet/pull/11825#issuecomment-406354258 @eric-haibin-lin Please give a review when you have time, thanks!
[GitHub] anirudhacharya commented on issue #11811: Duplication of Operators for sampling from random distributions
anirudhacharya commented on issue #11811: Duplication of Operators for sampling from random distributions URL: https://github.com/apache/incubator-mxnet/issues/11811#issuecomment-406354362 I was debugging the sample.multinomial API in the R bindings, which is broken. Digging a bit deeper, I found that there are multiple implementations of the samplers for the different random distributions in the C++ backend, so I wanted to know which one is the correct one to use, and why we have multiple implementations of the same functionality.
[GitHub] mdtdev commented on issue #11822: Install from pre-built binaries failing
mdtdev commented on issue #11822: Install from pre-built binaries failing URL: https://github.com/apache/incubator-mxnet/issues/11822#issuecomment-406355929 Unfortunately no, @ankkhedia, I cannot. I have asked one of the people with a Windows box to do so. Well, actually maybe I can: they got the install to look OK and the library loaded OK, but it failed when they tried the test code:
```
library(mxnet)
a <- mx.nd.ones(c(2,3), ctx = mx.cpu())
```
The error happened after the `a <-` line, but I do not have access to the error message. Sorry!
[GitHub] hetong007 commented on issue #11822: Install from pre-built binaries failing
hetong007 commented on issue #11822: Install from pre-built binaries failing URL: https://github.com/apache/incubator-mxnet/issues/11822#issuecomment-406358076 @mdtdev In your system terminal, please run
```
brew install openblas opencv
```
If it complains, please try to execute with `sudo`, as
```
sudo brew install openblas opencv
```
then try to install the package again.
[GitHub] junrushao1994 commented on issue #11760: [MXNET-684] Add `ifelse` operator
junrushao1994 commented on issue #11760: [MXNET-684] Add `ifelse` operator URL: https://github.com/apache/incubator-mxnet/pull/11760#issuecomment-406358391 @zheng-da @piiswrong @szha @eric-haibin-lin Hey could you help review this PR?
[GitHub] ankkhedia commented on issue #11822: Install from pre-built binaries failing
ankkhedia commented on issue #11822: Install from pre-built binaries failing URL: https://github.com/apache/incubator-mxnet/issues/11822#issuecomment-406360104 @mdtdev The above instructions for Windows work for me and I am not getting an error. Some additional dependencies might be required, but I cannot say exactly which unless we get hold of the exact error message.
[GitHub] hetong007 commented on issue #11822: Install from pre-built binaries failing
hetong007 commented on issue #11822: Install from pre-built binaries failing URL: https://github.com/apache/incubator-mxnet/issues/11822#issuecomment-406360512 @ankkhedia The exact error message is there:
```
Library not loaded: /usr/local/opt/openblas/lib/libopenblasp-r0.3.1.dylib
```
[GitHub] vishaalkapoor opened a new pull request #11826: Adding missing optimizers that use momentum.
vishaalkapoor opened a new pull request #11826: Adding missing optimizers that use momentum. URL: https://github.com/apache/incubator-mxnet/pull/11826 ## Description ## This addresses an omission in the image classification examples where signum and lbsgd are not listed as optimizers having momentum. Resolves issue #11620.
[GitHub] samskalicky opened a new pull request #11827: fix for bug #10868: _backward_softsign activation is incorrect
samskalicky opened a new pull request #11827: fix for bug #10868: _backward_softsign activation is incorrect URL: https://github.com/apache/incubator-mxnet/pull/11827 ## Description ## The problem was that the softsign gradient was computed using the operator's outputs instead of its inputs. This PR adds the inputs to the gradient dependency list, changes what gets passed into the softsign gradient calculation, and adds a softsign test to the test_activation function. Code reviewed by anirudh.
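The math behind the fix: softsign(x) = x / (1 + |x|) has derivative 1 / (1 + |x|)², which is a function of the *input* x, so plugging the *output* into the same formula yields a wrong gradient. A small plain-Python illustration (not MXNet code) with a finite-difference check:

```python
def softsign(x):
    return x / (1.0 + abs(x))

def softsign_grad(x):
    # Correct gradient: depends on the input x.
    return 1.0 / (1.0 + abs(x)) ** 2

def softsign_grad_wrong(y):
    # The buggy pattern: feeding the output y into the same formula.
    return 1.0 / (1.0 + abs(y)) ** 2

x = 1.0
y = softsign(x)  # 0.5
eps = 1e-6
# Central finite difference approximates the true derivative at x.
numeric = (softsign(x + eps) - softsign(x - eps)) / (2 * eps)
print(softsign_grad(x))        # 0.25, matches the numeric estimate
print(softsign_grad_wrong(y))  # ~0.444, clearly wrong
```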
[GitHub] mdtdev commented on issue #11822: Install from pre-built binaries failing
mdtdev commented on issue #11822: Install from pre-built binaries failing URL: https://github.com/apache/incubator-mxnet/issues/11822#issuecomment-406361218 @hetong007 Thank you, that appears to have fixed it. Is that something missing from the installation and setup instructions, or did I miss something big somewhere? Either way, it seems to be fixed, so thanks!
```
library(mxnet)
a <- mx.nd.ones(c(2,3), ctx = mx.cpu())
b <- a * 2 + 1
b
```
shows the desired test result.
[GitHub] apeforest commented on issue #11224: ‘make lint’ is broken under python2
apeforest commented on issue #11224: ‘make lint’ is broken under python2 URL: https://github.com/apache/incubator-mxnet/issues/11224#issuecomment-406361446 @TaoLv PR has been merged. Please pull the latest dmlc-core module and verify. Thanks.
[GitHub] vishaalkapoor commented on issue #11620: LBSGD does not have momentum
vishaalkapoor commented on issue #11620: LBSGD does not have momentum URL: https://github.com/apache/incubator-mxnet/issues/11620#issuecomment-406361680 @thomelane I've submitted a pull request that adds signum and lbsgd to the list of optimizers with momentum. Please take a look: [link](https://github.com/apache/incubator-mxnet/pull/11826)
[GitHub] ankkhedia commented on issue #11822: Install from pre-built binaries failing
ankkhedia commented on issue #11822: Install from pre-built binaries failing URL: https://github.com/apache/incubator-mxnet/issues/11822#issuecomment-406361771 @hetong007 I was talking about the Windows error which @mdtdev mentioned in the issue description.
[GitHub] hetong007 commented on issue #11822: Install from pre-built binaries failing
hetong007 commented on issue #11822: Install from pre-built binaries failing URL: https://github.com/apache/incubator-mxnet/issues/11822#issuecomment-406362933 @mdtdev It'll be a temporary fix. We are working to resolve the dependency issues. @ankkhedia You are correct, I overlooked that piece. I don't recall similar reports after the update two weeks ago.
[GitHub] eric-haibin-lin commented on a change in pull request #11825: Fix undeterminism of dot(csr.T, dns) = dns with tests
eric-haibin-lin commented on a change in pull request #11825: Fix undeterminism of dot(csr.T, dns) = dns with tests URL: https://github.com/apache/incubator-mxnet/pull/11825#discussion_r203820656 ## File path: src/operator/tensor/dot-inl.cuh ## @@ -573,12 +581,113 @@ inline void DotCsrDnsDnsImpl(const OpContext& ctx, data_out.dptr(), data_l.dptr(), indptr_l.dptr(), Review comment: Since DotCsrTransDnsDnsWarpKernel, DotCsrTransDnsDnsThreadBlockKernel, and DotCsrTransDnsDnsWarpBlockKernel are nondeterministic and cannot be used, we might as well just remove them.
[GitHub] haojin2 commented on a change in pull request #11825: Fix undeterminism of dot(csr.T, dns) = dns with tests
haojin2 commented on a change in pull request #11825: Fix undeterminism of dot(csr.T, dns) = dns with tests URL: https://github.com/apache/incubator-mxnet/pull/11825#discussion_r203823765 ## File path: src/operator/tensor/dot-inl.cuh ## @@ -573,12 +581,113 @@ inline void DotCsrDnsDnsImpl(const OpContext& ctx, data_out.dptr(), data_l.dptr(), indptr_l.dptr(), Review comment: Sure I can do that.
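Background on why those kernels were nondeterministic: when many GPU threads accumulate partial products into the same output element via atomic adds, the summation order varies from run to run, and floating-point addition is not associative. A tiny plain-Python demonstration of the underlying effect:

```python
# Floating-point addition is not associative, so the order in which
# parallel threads atomically accumulate into an output cell changes
# the low-order bits of the result between runs.
a, b, c = 0.1, 0.2, 0.3
left = (a + b) + c    # one accumulation order
right = a + (b + c)   # a different accumulation order
print(left)           # 0.6000000000000001
print(right)          # 0.6
print(left == right)  # False
```

The difference is tiny per element, but it makes bitwise-reproducibility tests fail, which is why the fix replaces the atomic-add kernels with a deterministic accumulation strategy.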
[GitHub] apeforest commented on issue #11285: Crash while running gluon image-classification.py example with float16
apeforest commented on issue #11285: Crash while running gluon image-classification.py example with float16 URL: https://github.com/apache/incubator-mxnet/issues/11285#issuecomment-406368075 @Ishitori I ran your example and got a different stack trace from the one reported:
```
  File "/home/ubuntu/src/mxnet/example/image-classification/common/fit.py", line 316, in fit
    monitor=monitor)
  File "/home/ubuntu/.virtualenvs/mxnet/lib/python3.5/site-packages/mxnet/module/base_module.py", line 520, in fit
    self.update_metric(eval_metric, data_batch.label)
  File "/home/ubuntu/.virtualenvs/mxnet/lib/python3.5/site-packages/mxnet/module/module.py", line 757, in update_metric
    self._exec_group.update_metric(eval_metric, labels)
  File "/home/ubuntu/.virtualenvs/mxnet/lib/python3.5/site-packages/mxnet/module/executor_group.py", line 616, in update_metric
    eval_metric.update_dict(labels_, preds)
  File "/home/ubuntu/.virtualenvs/mxnet/lib/python3.5/site-packages/mxnet/metric.py", line 304, in update_dict
    metric.update_dict(labels, preds)
  File "/home/ubuntu/.virtualenvs/mxnet/lib/python3.5/site-packages/mxnet/metric.py", line 132, in update_dict
    self.update(label, pred)
  File "/home/ubuntu/.virtualenvs/mxnet/lib/python3.5/site-packages/mxnet/metric.py", line 418, in update
    pred_label = pred_label.asnumpy().astype('int32')
  File "/home/ubuntu/.virtualenvs/mxnet/lib/python3.5/site-packages/mxnet/ndarray/ndarray.py", line 1876, in asnumpy
    ctypes.c_size_t(data.size)))
  File "/home/ubuntu/.virtualenvs/mxnet/lib/python3.5/site-packages/mxnet/base.py", line 149, in check_call
    raise MXNetError(py_str(_LIB.MXGetLastError()))
mxnet.base.MXNetError: [18:13:32] src/operator/nn/./cudnn/cudnn_convolution-inl.h:836: Failed to find any forward convolution algorithm.
```
[GitHub] apeforest removed a comment on issue #11285: Crash while running gluon image-classification.py example with float16
apeforest removed a comment on issue #11285: Crash while running gluon image-classification.py example with float16 URL: https://github.com/apache/incubator-mxnet/issues/11285#issuecomment-406368075 (The removed comment is the stack-trace report quoted in the previous entry.)
[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.
This is an automated email from the ASF dual-hosted git repository. zhasheng pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new 0287a32 Bump the publish timestamp. 0287a32 is described below commit 0287a32cf5354c05149c7426d21cf0e8232eb7fb Author: mxnet-ci AuthorDate: Thu Jul 19 18:47:21 2018 + Bump the publish timestamp. --- date.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/date.txt b/date.txt new file mode 100644 index 000..89851e2 --- /dev/null +++ b/date.txt @@ -0,0 +1 @@ +Thu Jul 19 18:47:21 UTC 2018
[GitHub] aaronmarkham commented on issue #11810: website build and deploy failing
aaronmarkham commented on issue #11810: website build and deploy failing URL: https://github.com/apache/incubator-mxnet/issues/11810#issuecomment-406377609 Thanks Sheng. The website build just finished successfully.
[GitHub] aaronmarkham closed issue #11810: website build and deploy failing
aaronmarkham closed issue #11810: website build and deploy failing URL: https://github.com/apache/incubator-mxnet/issues/11810
[GitHub] sandeep-krishnamurthy commented on issue #7229: How to visualize intermediate layer output that are learnt during training process?
sandeep-krishnamurthy commented on issue #7229: How to visualize intermediate layer output that are learnt during training process? URL: https://github.com/apache/incubator-mxnet/issues/7229#issuecomment-406378110 MXBoard should solve this use case - https://github.com/awslabs/mxboard
[GitHub] sandeep-krishnamurthy closed issue #7229: How to visualize intermediate layer output that are learnt during training process?
sandeep-krishnamurthy closed issue #7229: How to visualize intermediate layer output that are learnt during training process? URL: https://github.com/apache/incubator-mxnet/issues/7229
[GitHub] anirudh2290 commented on a change in pull request #11827: fix for bug #10868: _backward_softsign activation is incorrect
anirudh2290 commented on a change in pull request #11827: fix for bug #10868: _backward_softsign activation is incorrect URL: https://github.com/apache/incubator-mxnet/pull/11827#discussion_r203837194 ## File path: src/operator/nn/activation.cc ## @@ -44,11 +44,18 @@ struct ActivationGrad { const std::vector& ograds) const { std::vector heads(ograds.begin(), ograds.end()); heads.emplace_back(nnvm::NodeEntry{n, activation::kOut, 0}); -#if (MXNET_USE_CUDNN == 1 || MXNET_USE_MKLDNN == 1) + const NodeAttrs& attrs = n->attrs; +if (dmlc::get(attrs.parsed).act_type == activation::kSoftSign) { Review comment: nit: cache act_type to avoid repeated calls.
[GitHub] anirudh2290 commented on issue #11827: fix for bug #10868: _backward_softsign activation is incorrect
anirudh2290 commented on issue #11827: fix for bug #10868: _backward_softsign activation is incorrect URL: https://github.com/apache/incubator-mxnet/pull/11827#issuecomment-406380993 @nswamy @eric-haibin-lin this fixes #10868. Please take a look.
[GitHub] haojin2 commented on a change in pull request #11825: Fix undeterminism of dot(csr.T, dns) = dns with tests
haojin2 commented on a change in pull request #11825: Fix undeterminism of dot(csr.T, dns) = dns with tests URL: https://github.com/apache/incubator-mxnet/pull/11825#discussion_r203840604 ## File path: src/operator/tensor/dot-inl.cuh ## @@ -573,12 +581,113 @@ inline void DotCsrDnsDnsImpl(const OpContext& ctx, data_out.dptr(), data_l.dptr(), indptr_l.dptr(), Review comment: Done.
[GitHub] haojin2 commented on a change in pull request #11827: fix for bug #10868: _backward_softsign activation is incorrect
haojin2 commented on a change in pull request #11827: fix for bug #10868: _backward_softsign activation is incorrect URL: https://github.com/apache/incubator-mxnet/pull/11827#discussion_r203842648 ## File path: src/operator/nn/activation.cc ## @@ -44,11 +44,18 @@ struct ActivationGrad { const std::vector& ograds) const { std::vector heads(ograds.begin(), ograds.end()); heads.emplace_back(nnvm::NodeEntry{n, activation::kOut, 0}); -#if (MXNET_USE_CUDNN == 1 || MXNET_USE_MKLDNN == 1) + const NodeAttrs& attrs = n->attrs; +if (dmlc::get(attrs.parsed).act_type == activation::kSoftSign) { + // for softsign need the inputs to compute the activation. + heads.push_back(n->inputs[activation::kData]); +} + +#if (MXNET_USE_CUDNN == 1 || MXNET_USE_MKLDNN == 1) // for ReLU, no need to pass input data. This enables inplace optimization during the // forward pass. -if (dmlc::get(attrs.parsed).act_type != activation::kReLU) { +if (dmlc::get(attrs.parsed).act_type != activation::kReLU && + dmlc::get(attrs.parsed).act_type != activation::kSoftSign) { Review comment: nit: alignment with the line above:
```
if (dmlc ...
    dmlc ...)
```
[GitHub] nswamy closed pull request #11466: [MXNET-560] Add temperature parameter in Softmax operator
nswamy closed pull request #11466: [MXNET-560] Add temperature parameter in Softmax operator URL: https://github.com/apache/incubator-mxnet/pull/11466 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance: diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md index ec32adf9000..e44c9677e00 100644 --- a/CONTRIBUTORS.md +++ b/CONTRIBUTORS.md @@ -172,5 +172,6 @@ List of Contributors * [Thomas Delteil](https://github.com/ThomasDelteil) * [Jesse Brizzi](https://github.com/jessebrizzi) * [Hang Zhang](http://hangzh.com) +* [Lin Yuan](https://github.com/apeforest) * [Kou Ding](https://github.com/chinakook) * [Istvan Fehervari](https://github.com/ifeherva) diff --git a/cpp-package/scripts/OpWrapperGenerator.py b/cpp-package/scripts/OpWrapperGenerator.py index 8facde16840..1b5f8b56b92 100644 --- a/cpp-package/scripts/OpWrapperGenerator.py +++ b/cpp-package/scripts/OpWrapperGenerator.py @@ -95,6 +95,7 @@ class Arg: 'int or None':'dmlc::optional',\ 'long':'int64_t',\ 'double':'double',\ +'double or None':'dmlc::optional',\ 'Shape or None':'dmlc::optional',\ 'string':'const std::string&'} name = '' diff --git a/src/operator/contrib/ctc_loss-inl.h b/src/operator/contrib/ctc_loss-inl.h index ef58c519aa9..0e7b63e58fb 100644 --- a/src/operator/contrib/ctc_loss-inl.h +++ b/src/operator/contrib/ctc_loss-inl.h @@ -409,7 +409,7 @@ class CTCLossOp : public Operator { // since the input is activation before softmax and cudnn ctc takes softmax // apply softmax to inputs first. 
-mxnet_op::Softmax(s, data.dptr_, prob.dptr_, data.shape_, 2); +mxnet_op::Softmax(s, data.dptr_, prob.dptr_, data.shape_, 2, 1.0); CUDNN_CALL(cudnnCTCLoss(s->dnn_handle_, prob_desc_, @@ -427,7 +427,7 @@ class CTCLossOp : public Operator { if (req_grad) { mxnet_op::SoftmaxGrad(s, - prob.dptr_, grad.dptr_, grad.dptr_, data.shape_, 2); + prob.dptr_, grad.dptr_, grad.dptr_, data.shape_, 2, 1.0); Assign(grad, mxnet::kWriteInplace, grad * alphabet_size); } } diff --git a/src/operator/nn/mkldnn/mkldnn_base-inl.h b/src/operator/nn/mkldnn/mkldnn_base-inl.h index f77d113dd1d..bbfb873ee86 100644 --- a/src/operator/nn/mkldnn/mkldnn_base-inl.h +++ b/src/operator/nn/mkldnn/mkldnn_base-inl.h @@ -146,9 +146,11 @@ namespace op { struct ActivationParam; struct ConvolutionParam; struct DeconvolutionParam; +struct SoftmaxParam; bool SupportMKLDNNAct(const ActivationParam& param); bool SupportMKLDNNConv(const ConvolutionParam& params, const NDArray &input); bool SupportMKLDNNDeconv(const DeconvolutionParam& params, const NDArray &input); +bool SupportMKLDNNSoftmax(const SoftmaxParam& param); } static int GetTypeSize(int dtype) { diff --git a/src/operator/nn/mkldnn/mkldnn_softmax.cc b/src/operator/nn/mkldnn/mkldnn_softmax.cc index acfa358a796..7268ed39339 100644 --- a/src/operator/nn/mkldnn/mkldnn_softmax.cc +++ b/src/operator/nn/mkldnn/mkldnn_softmax.cc @@ -32,6 +32,15 @@ namespace mxnet { namespace op { +bool SupportMKLDNNSoftmax(const SoftmaxParam ¶m) { + // MKLDNN does not support temperature argument in their softmax function + // now. Need update this once they start to support it. 
+ if (param.temperature.has_value()) { +return false; + } + return true; +} + void MKLDNNSoftmaxForward(const nnvm::NodeAttrs& attrs, const OpContext &ctx, const NDArray &in_data, const OpReqType &req, const NDArray &out_data) { diff --git a/src/operator/nn/softmax-inl.h b/src/operator/nn/softmax-inl.h index 080bc08852c..64b436e7ea0 100644 --- a/src/operator/nn/softmax-inl.h +++ b/src/operator/nn/softmax-inl.h @@ -53,7 +53,7 @@ struct log_softmax_fwd { template inline void Softmax(Stream *s, DType *in, DType *out, -Shape shape, int axis) { +Shape shape, int axis, const DType temperature) { index_t M = shape[axis]; index_t N = shape.Size()/M; Shape stride = calc_stride(shape); @@ -71,12 +71,25 @@ inline void Softmax(Stream *s, DType *in, DType *out, } DType sum = DType(0); -for (index_t j = 0; j < M; ++j) { - sum += std::exp(in[base + j*sa] - mmax); -} +// By default temperature is 1.0, and only in reinforcement training +// users would set it to other values. +// Adding a branch here to save the CPU 'divide-by-1' computation at runtime +if (temperature == 1.0) { + for (index_t j = 0; j < M; ++j) { +sum += std::exp(in[base + j*sa] - mmax); + } + + for (index_t j = 0; j < M; ++j) { +out[base + j*sa] = OP::Map(in[base + j*sa] - mmax, sum); + } +} else { + for (index_t j = 0; j < M; ++j) { +sum +
[incubator-mxnet] branch master updated: [MXNET-560] Add temperature parameter in Softmax operator (#11466)
This is an automated email from the ASF dual-hosted git repository. nswamy pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new b1f2f44 [MXNET-560] Add temperature parameter in Softmax operator (#11466) b1f2f44 is described below commit b1f2f44118887debc380d43711257b7df099678a Author: Lin Yuan AuthorDate: Thu Jul 19 12:56:06 2018 -0700 [MXNET-560] Add temperature parameter in Softmax operator (#11466) * Add temperature parameter in softmax operator and add a unit test * Optimize runtime when temperature is set to default 1.0 * Add temperature parameter in softmax operator and add a unit test --- CONTRIBUTORS.md | 1 + cpp-package/scripts/OpWrapperGenerator.py | 1 + src/operator/contrib/ctc_loss-inl.h | 4 +- src/operator/nn/mkldnn/mkldnn_base-inl.h | 2 + src/operator/nn/mkldnn/mkldnn_softmax.cc | 9 src/operator/nn/softmax-inl.h | 82 ++- src/operator/nn/softmax.cc| 7 ++- tests/python/unittest/test_operator.py| 16 +- 8 files changed, 94 insertions(+), 28 deletions(-) diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md index c28214d..b04e4a3 100644 --- a/CONTRIBUTORS.md +++ b/CONTRIBUTORS.md @@ -172,6 +172,7 @@ List of Contributors * [Thomas Delteil](https://github.com/ThomasDelteil) * [Jesse Brizzi](https://github.com/jessebrizzi) * [Hang Zhang](http://hangzh.com) +* [Lin Yuan](https://github.com/apeforest) * [Kou Ding](https://github.com/chinakook) * [Istvan Fehervari](https://github.com/ifeherva) * [Aaron Markham](https://github.com/aaronmarkham) diff --git a/cpp-package/scripts/OpWrapperGenerator.py b/cpp-package/scripts/OpWrapperGenerator.py index 8facde1..1b5f8b5 100644 --- a/cpp-package/scripts/OpWrapperGenerator.py +++ b/cpp-package/scripts/OpWrapperGenerator.py @@ -95,6 +95,7 @@ class Arg: 'int or None':'dmlc::optional',\ 'long':'int64_t',\ 'double':'double',\ +'double or None':'dmlc::optional',\ 'Shape or None':'dmlc::optional',\ 
'string':'const std::string&'} name = '' diff --git a/src/operator/contrib/ctc_loss-inl.h b/src/operator/contrib/ctc_loss-inl.h index ef58c51..0e7b63e 100644 --- a/src/operator/contrib/ctc_loss-inl.h +++ b/src/operator/contrib/ctc_loss-inl.h @@ -409,7 +409,7 @@ class CTCLossOp : public Operator { // since the input is activation before softmax and cudnn ctc takes softmax // apply softmax to inputs first. -mxnet_op::Softmax(s, data.dptr_, prob.dptr_, data.shape_, 2); +mxnet_op::Softmax(s, data.dptr_, prob.dptr_, data.shape_, 2, 1.0); CUDNN_CALL(cudnnCTCLoss(s->dnn_handle_, prob_desc_, @@ -427,7 +427,7 @@ class CTCLossOp : public Operator { if (req_grad) { mxnet_op::SoftmaxGrad(s, - prob.dptr_, grad.dptr_, grad.dptr_, data.shape_, 2); + prob.dptr_, grad.dptr_, grad.dptr_, data.shape_, 2, 1.0); Assign(grad, mxnet::kWriteInplace, grad * alphabet_size); } } diff --git a/src/operator/nn/mkldnn/mkldnn_base-inl.h b/src/operator/nn/mkldnn/mkldnn_base-inl.h index f77d113..bbfb873 100644 --- a/src/operator/nn/mkldnn/mkldnn_base-inl.h +++ b/src/operator/nn/mkldnn/mkldnn_base-inl.h @@ -146,9 +146,11 @@ namespace op { struct ActivationParam; struct ConvolutionParam; struct DeconvolutionParam; +struct SoftmaxParam; bool SupportMKLDNNAct(const ActivationParam& param); bool SupportMKLDNNConv(const ConvolutionParam& params, const NDArray &input); bool SupportMKLDNNDeconv(const DeconvolutionParam& params, const NDArray &input); +bool SupportMKLDNNSoftmax(const SoftmaxParam& param); } static int GetTypeSize(int dtype) { diff --git a/src/operator/nn/mkldnn/mkldnn_softmax.cc b/src/operator/nn/mkldnn/mkldnn_softmax.cc index acfa358..7268ed3 100644 --- a/src/operator/nn/mkldnn/mkldnn_softmax.cc +++ b/src/operator/nn/mkldnn/mkldnn_softmax.cc @@ -32,6 +32,15 @@ namespace mxnet { namespace op { +bool SupportMKLDNNSoftmax(const SoftmaxParam ¶m) { + // MKLDNN does not support temperature argument in their softmax function + // now. Need update this once they start to support it. 
+ if (param.temperature.has_value()) { +return false; + } + return true; +} + void MKLDNNSoftmaxForward(const nnvm::NodeAttrs& attrs, const OpContext &ctx, const NDArray &in_data, const OpReqType &req, const NDArray &out_data) { diff --git a/src/operator/nn/softmax-inl.h b/src/operator/nn/softmax-inl.h index 080bc08..64b436e 100644 --- a/src/operator/nn/softmax-inl.h +++ b/src/operator/nn/softmax-inl.h @@ -53,7 +53,7 @@ struct log_softmax_fwd { template inline void Softmax(Stream *s, DType *in, DType *out, -Shape shape, int axis) { +Shape shape, int axis, const D
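The temperature-scaled softmax added by this commit can be sketched in pure Python. This is a hedged illustration of the kernel's logic only, not MXNet's actual implementation; it mirrors the max-subtraction for numerical stability and the `temperature == 1.0` fast path that skips the division, as described in the diff above.

```python
import math

def softmax(x, temperature=1.0):
    # Subtract the max for numerical stability, as the C++ kernel does.
    mmax = max(x)
    # Fast path: the default temperature of 1.0 skips the division,
    # mirroring the branch added in softmax-inl.h to save the
    # 'divide-by-1' computation at runtime.
    if temperature == 1.0:
        exps = [math.exp(v - mmax) for v in x]
    else:
        exps = [math.exp((v - mmax) / temperature) for v in x]
    total = sum(exps)
    return [e / total for e in exps]
```

A higher temperature flattens the distribution (the reinforcement-learning use case the commit comment mentions), while a temperature approaching 0 pushes the output toward a one-hot argmax.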
[GitHub] eric-haibin-lin commented on issue #11827: fix for bug #10868: _backward_softsign activation is incorrect
eric-haibin-lin commented on issue #11827: fix for bug #10868: _backward_softsign activation is incorrect URL: https://github.com/apache/incubator-mxnet/pull/11827#issuecomment-406398306 Good first blood! @samskalicky This is an automated message from the Apache Git Service. To respond to the message, please log on GitHub and use the URL above to go to the specific comment. For queries about this service, please contact Infrastructure at: us...@infra.apache.org With regards, Apache Git Services
[GitHub] apeforest commented on issue #11016: [Feature request] temperature parameter in Softmax and SoftmaxOutput
apeforest commented on issue #11016: [Feature request] temperature parameter in Softmax and SoftmaxOutput URL: https://github.com/apache/incubator-mxnet/issues/11016#issuecomment-406398284 @slitsey This feature has been merged. Please pull the latest code and verify. Thanks!
[GitHub] nswamy edited a comment on issue #11763: When Train SSD, It hold on during read the data
nswamy edited a comment on issue #11763: When Train SSD, It hold on during read the data URL: https://github.com/apache/incubator-mxnet/issues/11763#issuecomment-405764912 @burness where is the train_ssd.py coming from? could you provide a complete code to test with? What do you mean by your own VOCDetection dataset, did you modify the dataset for your use-case? @vishaalkapoor could you please review this?
[GitHub] apeforest commented on issue #11285: Crash while running gluon image-classification.py example with float16
apeforest commented on issue #11285: Crash while running gluon image-classification.py example with float16 URL: https://github.com/apache/incubator-mxnet/issues/11285#issuecomment-406402041 Confirmed the fix using one GPU. @msharmavikram Please let me know if you still see the issue.
[GitHub] apeforest commented on issue #11284: Issues with Gluon example/gluon/image_classification.py example
apeforest commented on issue #11284: Issues with Gluon example/gluon/image_classification.py example URL: https://github.com/apache/incubator-mxnet/issues/11284#issuecomment-406402386 Verified the fix using one GPU. @ZaidQureshi Please test using the latest release and confirm.
[GitHub] lanking520 opened a new pull request #11828: [MXNET-531] Update links for MXNet Scala Spark Example
lanking520 opened a new pull request #11828: [MXNET-531] Update links for MXNet Scala Spark Example URL: https://github.com/apache/incubator-mxnet/pull/11828

## Description ##
This is a fix regarding [this issue](https://github.com/apache/incubator-mxnet/issues/11060) @andrewfayres @nswamy @yzhliu

## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
- [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) created (except PRs with tiny changes)
- [ ] Changes are complete (i.e. I finished coding on this PR)
- [ ] All changes have test coverage:
  - Unit tests are added for small changes to verify correctness (e.g. adding a new operator)
  - Nightly tests are added for complicated/long-running ones (e.g. changing distributed kvstore)
  - Build tests will be added for build configuration changes (e.g. adding a new build option with NCCL)
- [ ] Code is well-documented:
  - For user-facing API changes, the API doc string has been updated.
  - For new C++ functions in header files, their functionalities and arguments are documented.
  - For new examples, a README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
  - Check the API doc at http://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
- [ ] To my best knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
[GitHub] eric-haibin-lin closed pull request #11643: Added the diag() operator
eric-haibin-lin closed pull request #11643: Added the diag() operator URL: https://github.com/apache/incubator-mxnet/pull/11643 This is a PR merged from a forked repository. As GitHub hides the original diff on merge, it is displayed below for the sake of provenance: As this is a foreign pull request (from a fork), the diff is supplied below (as it won't show otherwise due to GitHub magic): diff --git a/docs/api/python/ndarray/ndarray.md b/docs/api/python/ndarray/ndarray.md index dda534151a1..849412021e1 100644 --- a/docs/api/python/ndarray/ndarray.md +++ b/docs/api/python/ndarray/ndarray.md @@ -131,6 +131,7 @@ The `ndarray` package provides several classes: NDArray.flatten NDArray.expand_dims NDArray.split +NDArray.diag ``` ### Array expand elements @@ -364,6 +365,7 @@ The `ndarray` package provides several classes: ones_like full arange +diag load save ``` diff --git a/docs/api/python/symbol/symbol.md b/docs/api/python/symbol/symbol.md index 304b17803ed..a59a92745c7 100644 --- a/docs/api/python/symbol/symbol.md +++ b/docs/api/python/symbol/symbol.md @@ -182,6 +182,7 @@ Composite multiple symbols into a new one by an operator. Symbol.zeros_like Symbol.ones_like +Symbol.diag ``` ### Changing shape and type @@ -381,6 +382,7 @@ Composite multiple symbols into a new one by an operator. reshape_like flatten expand_dims +diag ``` ### Expanding elements diff --git a/python/mxnet/ndarray/ndarray.py b/python/mxnet/ndarray/ndarray.py index 09395e2ec82..ff9aac05c7c 100644 --- a/python/mxnet/ndarray/ndarray.py +++ b/python/mxnet/ndarray/ndarray.py @@ -1302,6 +1302,14 @@ def flip(self, *args, **kwargs): """ return op.flip(self, *args, **kwargs) +def diag(self, k=0, **kwargs): +"""Convenience fluent method for :py:func:`diag`. + +The arguments are the same as for :py:func:`diag`, with +this array as data. +""" +return op.diag(self, k, **kwargs) + def sum(self, *args, **kwargs): """Convenience fluent method for :py:func:`sum`. 
diff --git a/python/mxnet/symbol/symbol.py b/python/mxnet/symbol/symbol.py index b041f4ef646..88f92cde0fe 100644 --- a/python/mxnet/symbol/symbol.py +++ b/python/mxnet/symbol/symbol.py @@ -2038,6 +2038,14 @@ def flip(self, *args, **kwargs): """ return op.flip(self, *args, **kwargs) +def diag(self, k=0, **kwargs): +"""Convenience fluent method for :py:func:`diag`. + +The arguments are the same as for :py:func:`diag`, with +this array as data. +""" +return op.diag(self, k, **kwargs) + def sum(self, *args, **kwargs): """Convenience fluent method for :py:func:`sum`. diff --git a/src/operator/tensor/diag_op-inl.h b/src/operator/tensor/diag_op-inl.h new file mode 100644 index 000..3bc240f206b --- /dev/null +++ b/src/operator/tensor/diag_op-inl.h @@ -0,0 +1,217 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NOTICE file + * distributed with this work for additional information + * regarding copyright ownership. The ASF licenses this file + * to you under the Apache License, Version 2.0 (the + * "License"); you may not use this file except in compliance + * with the License. You may obtain a copy of the License at + * + * http://www.apache.org/licenses/LICENSE-2.0 + * + * Unless required by applicable law or agreed to in writing, + * software distributed under the License is distributed on an + * "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY + * KIND, either express or implied. See the License for the + * specific language governing permissions and limitations + * under the License. + */ + +/*! 
+* Copyright (c) 2015 by Contributors +* \file diag_op-inl.h +* \brief CPU Implementation of the diag op +* \author Istvan Fehervari +*/ + +#ifndef MXNET_OPERATOR_TENSOR_DIAG_OP_INL_H_ +#define MXNET_OPERATOR_TENSOR_DIAG_OP_INL_H_ + +#include +#include +#include +#include "../mxnet_op.h" +#include "../operator_common.h" +#include "../elemwise_op_common.h" + +namespace mxnet { +namespace op { + +struct DiagParam : public dmlc::Parameter { + dmlc::optional k; + DMLC_DECLARE_PARAMETER(DiagParam) { +DMLC_DECLARE_FIELD(k) +.set_default(dmlc::optional(0)) +.describe("Diagonal in question. The default is 0. " + "Use k>0 for diagonals above the main diagonal, " + "and k<0 for diagonals below the main diagonal. " + "If input has shape (S0 S1) k must be between -S0 and S1"); + } +}; + +inline TShape DiagShapeImpl(const TShape& ishape, const nnvm::dim_t k) { + if (ishape.ndim() == 1) { +auto s = ishape[0] + std::abs(k); +return TShape({s, s}); + } + + auto h = ishape[0]; + auto w = ishape[1]; + + if (k > 0) { +w -= k; + } else if (k < 0) { +h += k; + } + + auto s = std::min(h, w); + if
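The output-shape rule in `DiagShapeImpl` above follows `numpy.diag` semantics. Below is a hedged Python sketch of just that shape computation (`diag_shape` is a hypothetical name for illustration, not part of the operator):

```python
def diag_shape(ishape, k=0):
    # 1-D input of length n -> an (n + |k|, n + |k|) matrix holding
    # the input on the k-th diagonal.
    if len(ishape) == 1:
        s = ishape[0] + abs(k)
        return (s, s)
    # 2-D input (h, w) -> a 1-D diagonal; k > 0 trims columns,
    # k < 0 trims rows, and the length is min(h, w) after trimming.
    h, w = ishape
    if k > 0:
        w -= k
    elif k < 0:
        h += k
    return (min(h, w),)
```

For example, `diag_shape((3, 4), 2)` gives `(2,)`, matching the length of `np.diag` applied to a 3x4 matrix with `k=2`.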
[incubator-mxnet] branch master updated: Added the diag() operator (#11643)
This is an automated email from the ASF dual-hosted git repository. haibin pushed a commit to branch master in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git The following commit(s) were added to refs/heads/master by this push: new f15b1b8 Added the diag() operator (#11643) f15b1b8 is described below commit f15b1b88b9f055420ba19bb73e93b229bf03febd Author: Istvan Fehervari AuthorDate: Thu Jul 19 13:26:46 2018 -0700 Added the diag() operator (#11643) * Added np.diag as mxnet operator, WIP Done: 2d input forward pass Missing: 1d input forward all backward * Added a simple gradient transfer backwards operator for diag Fixed small typos as well * Finished backward operation * Added full support for k * Finished added the 1D case to the diag operator Finished function documentation Added unit tests * Fixed cpplinter errors in the diag operator Issues were extra white spaces and include order * Fixed indentation in diag_op-inl.h * Changed diag operator tests to use np.diag() as comparison * Fixed kernel bug in gpu diag operator * Replaced the min operator with an inline if statement. * Added diag to ndarray and symbol * Replaced the type of parameter k from int32 to nnvm::dim * Added default argument to k in ndarray and symbol * Fixed ndarray and symbol diag calls * Fixed the optional k parameter * Fixed cpp linting error * Changed test data datatype to float32 * K values resulting into 0-sized diagonals will now throw an exception. 
Added matching test case * Fixed unittest * Added diag to NDArray and Symbol api doc * Added missing api doc --- docs/api/python/ndarray/ndarray.md | 2 + docs/api/python/symbol/symbol.md | 2 + python/mxnet/ndarray/ndarray.py| 8 ++ python/mxnet/symbol/symbol.py | 8 ++ src/operator/tensor/diag_op-inl.h | 217 + src/operator/tensor/diag_op.cc | 93 ++ src/operator/tensor/diag_op.cu | 39 ++ tests/python/unittest/test_operator.py | 75 +++- 8 files changed, 443 insertions(+), 1 deletion(-) diff --git a/docs/api/python/ndarray/ndarray.md b/docs/api/python/ndarray/ndarray.md index dda5341..8494120 100644 --- a/docs/api/python/ndarray/ndarray.md +++ b/docs/api/python/ndarray/ndarray.md @@ -131,6 +131,7 @@ The `ndarray` package provides several classes: NDArray.flatten NDArray.expand_dims NDArray.split +NDArray.diag ``` ### Array expand elements @@ -364,6 +365,7 @@ The `ndarray` package provides several classes: ones_like full arange +diag load save ``` diff --git a/docs/api/python/symbol/symbol.md b/docs/api/python/symbol/symbol.md index 304b178..a59a927 100644 --- a/docs/api/python/symbol/symbol.md +++ b/docs/api/python/symbol/symbol.md @@ -182,6 +182,7 @@ Composite multiple symbols into a new one by an operator. Symbol.zeros_like Symbol.ones_like +Symbol.diag ``` ### Changing shape and type @@ -381,6 +382,7 @@ Composite multiple symbols into a new one by an operator. reshape_like flatten expand_dims +diag ``` ### Expanding elements diff --git a/python/mxnet/ndarray/ndarray.py b/python/mxnet/ndarray/ndarray.py index 09395e2..ff9aac0 100644 --- a/python/mxnet/ndarray/ndarray.py +++ b/python/mxnet/ndarray/ndarray.py @@ -1302,6 +1302,14 @@ fixed-size items. """ return op.flip(self, *args, **kwargs) +def diag(self, k=0, **kwargs): +"""Convenience fluent method for :py:func:`diag`. + +The arguments are the same as for :py:func:`diag`, with +this array as data. 
+""" +return op.diag(self, k, **kwargs) + def sum(self, *args, **kwargs): """Convenience fluent method for :py:func:`sum`. diff --git a/python/mxnet/symbol/symbol.py b/python/mxnet/symbol/symbol.py index b041f4e..88f92cd 100644 --- a/python/mxnet/symbol/symbol.py +++ b/python/mxnet/symbol/symbol.py @@ -2038,6 +2038,14 @@ class Symbol(SymbolBase): """ return op.flip(self, *args, **kwargs) +def diag(self, k=0, **kwargs): +"""Convenience fluent method for :py:func:`diag`. + +The arguments are the same as for :py:func:`diag`, with +this array as data. +""" +return op.diag(self, k, **kwargs) + def sum(self, *args, **kwargs): """Convenience fluent method for :py:func:`sum`. diff --git a/src/operator/tensor/diag_op-inl.h b/src/operator/tensor/diag_op-inl.h new file mode 100644 index 000..3bc240f --- /dev/null +++ b/src/operator/tensor/diag_op-inl.h @@ -0,0 +1,217 @@ +/* + * Licensed to the Apache Software Foundation (ASF) under one + * or more contributor license agreements. See the NO
[GitHub] apeforest commented on issue #8997: gluon hang-up unexpectedly
apeforest commented on issue #8997: gluon hang-up unexpectedly URL: https://github.com/apache/incubator-mxnet/issues/8997#issuecomment-406405522 @shuokay I can no longer reproduce this issue. Please help to verify. Thanks
[GitHub] apeforest commented on issue #9167: Gluon raises error if the user does not call nd.waitall()
apeforest commented on issue #9167: Gluon raises error if the user does not call nd.waitall() URL: https://github.com/apache/incubator-mxnet/issues/9167#issuecomment-406405643 @sxjscience I can no longer reproduce this issue. Please help to verify. Thanks!
[GitHub] lanking520 commented on issue #11681: [MXNET-531] improvement of adding source.jar to assembly
lanking520 commented on issue #11681: [MXNET-531] improvement of adding source.jar to assembly URL: https://github.com/apache/incubator-mxnet/pull/11681#issuecomment-406409628 It finally passed the Clojure test as expected! Thanks guys!
[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.
This is an automated email from the ASF dual-hosted git repository. zhasheng pushed a commit to branch asf-site in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git The following commit(s) were added to refs/heads/asf-site by this push: new 2f34afa Bump the publish timestamp. 2f34afa is described below commit 2f34afa2f8edcaa24e2ac6e240961d55164cf838 Author: mxnet-ci AuthorDate: Thu Jul 19 20:54:19 2018 + Bump the publish timestamp. --- date.txt | 1 + 1 file changed, 1 insertion(+) diff --git a/date.txt b/date.txt new file mode 100644 index 000..2ff944b --- /dev/null +++ b/date.txt @@ -0,0 +1 @@ +Thu Jul 19 20:54:19 UTC 2018
[GitHub] haojin2 commented on issue #8383: test_arange failure - Jetson TX2 (CPU)
haojin2 commented on issue #8383: test_arange failure - Jetson TX2 (CPU) URL: https://github.com/apache/incubator-mxnet/issues/8383#issuecomment-406411884 @KellenSunderland Since numpy has defined behavior for such inputs, I think the test is valid. I also tried this on Ubuntu and Mac, and both produced the same results as numpy. Maybe this is a platform-specific thing? The code here: https://github.com/apache/incubator-mxnet/blob/master/src/operator/tensor/init_op.h#L449-L452 already addresses the signed-unsigned issue. There seems to be some problem with the step conversion, as each output entry from MXNet's arange has the same value, 50, which is the start value.
[GitHub] haojin2 edited a comment on issue #8383: test_arange failure - Jetson TX2 (CPU)
haojin2 edited a comment on issue #8383: test_arange failure - Jetson TX2 (CPU) URL: https://github.com/apache/incubator-mxnet/issues/8383#issuecomment-406411884 @KellenSunderland Since numpy has defined behavior for such inputs, I think the test is valid. I also tried this on Ubuntu and Mac, and both produced the same results as numpy. Maybe this is a platform-specific thing? The code here: https://github.com/apache/incubator-mxnet/blob/master/src/operator/tensor/init_op.h#L449-L452 already addresses the signed-unsigned issue. From the outputs, I think there may be some problem with the step conversion, as each output entry from MXNet's arange has the same value, 50, which is the start value.
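For reference, the numpy arange semantics that test_arange relies on can be sketched in pure Python. This is a hedged approximation for illustration only (the element count is ceil((stop - start) / step)), not MXNet's or numpy's implementation:

```python
import math

def arange(start, stop=None, step=1.0):
    # One-argument form: arange(stop) counts from 0, as in numpy.
    if stop is None:
        start, stop = 0.0, float(start)
    # numpy produces ceil((stop - start) / step) elements; element i is
    # start + i * step. A correct implementation must not collapse every
    # entry to the start value, which is the bug symptom described above.
    n = max(0, math.ceil((stop - start) / step))
    return [start + i * step for i in range(n)]
```

For example, `arange(50, 60, 2)` yields five elements starting at 50, and a negative step counts downward.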
[GitHub] zheng-da commented on a change in pull request #11608: [MXNET-489] MKLDNN Pool test
zheng-da commented on a change in pull request #11608: [MXNET-489] MKLDNN Pool test URL: https://github.com/apache/incubator-mxnet/pull/11608#discussion_r203869745 ## File path: src/operator/nn/mkldnn/mkldnn_pooling.cc ## @@ -86,15 +86,15 @@ void MKLDNNPoolingFwd::Init(const mxnet::NDArray &input, const mxnet::NDArray &o return; } -void MKLDNNPoolingFwd::SetDataHandle(const mxnet::NDArray &data, - const mxnet::NDArray &output, - const mxnet::NDArray *workspace) { +void MKLDNNPoolingFwd::SetNewMem(const NDArray in_data, + const NDArray out_data, Review comment: can you use references for the input NDArrays?
[GitHub] KellenSunderland commented on issue #8383: test_arange failure - Jetson TX2 (CPU)
KellenSunderland commented on issue #8383: test_arange failure - Jetson TX2 (CPU) URL: https://github.com/apache/incubator-mxnet/issues/8383#issuecomment-406413964 Yeah sorry I thought this issue was closed. I believe it's been fixed. Give me some time and I'll verify.
[GitHub] apeforest commented on issue #10262: bug in word language model example
apeforest commented on issue #10262: bug in word language model example URL: https://github.com/apache/incubator-mxnet/issues/10262#issuecomment-406415478 @szha What is the issue here? I don't see misalignment in the arguments now. Please clarify. Thanks!
[GitHub] apeforest commented on issue #11331: gluon bug: AttributeError: '_thread._local' object has no attribute 'value'
apeforest commented on issue #11331: gluon bug: AttributeError: '_thread._local' object has no attribute 'value' URL: https://github.com/apache/incubator-mxnet/issues/11331#issuecomment-406417216 @sandeep-krishnamurthy This issue has been resolved. Please close it. Thanks
[GitHub] haojin2 commented on issue #8383: test_arange failure - Jetson TX2 (CPU)
haojin2 commented on issue #8383: test_arange failure - Jetson TX2 (CPU) URL: https://github.com/apache/incubator-mxnet/issues/8383#issuecomment-406417239 @KellenSunderland Thanks for your quick response! Please close the issue if the results on your end show that this has been solved.
[GitHub] nswamy commented on issue #11594: [FeatureRequest]MXNet Split operator to support variable length splits
nswamy commented on issue #11594: [FeatureRequest]MXNet Split operator to support variable length splits URL: https://github.com/apache/incubator-mxnet/issues/11594#issuecomment-406418390 Closing, as the PR fixes this.