leezu closed issue #12795: Deserialization problem with gluon `ValueError:
There are multiple outputs with name ...`
URL: https://github.com/apache/incubator-mxnet/issues/12795
This is an automated message from the Apache Git Service.
This is an automated email from the ASF dual-hosted git repository.
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new a31f746 Bump the publis
xidulu opened a new pull request #16811: [Numpy] Add gammaln, erf, erfinv to
npx namespace
URL: https://github.com/apache/incubator-mxnet/pull/16811
## Description ##
As title
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items for your PR.
-
leezu commented on issue #16760: [Estimator] Improve usability and fix logging
inconsistencies
URL: https://github.com/apache/incubator-mxnet/pull/16760#issuecomment-553746512
Superseded by https://github.com/apache/incubator-mxnet/pull/16810
---
leezu opened a new pull request #16810: [Gluon] Improve estimator usability and
fix logging logic
URL: https://github.com/apache/incubator-mxnet/pull/16810
## Description ##
gluon.contrib.estimator used a global Logger obtained via
`logging.getLogger('gluon.contrib.estimator.event_han
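The pitfall this PR describes can be illustrated with the standard library alone. This is a minimal sketch under assumptions: the logger name below is hypothetical (the real name is truncated in the snippet above), and the `make_estimator_logger` helper only stands in for what each Estimator instance was doing.

```python
import logging

def make_estimator_logger():
    # Every "instance" fetches the same module-level logger object and
    # attaches its own handler to it, so handlers pile up over time.
    logger = logging.getLogger('my.shared.estimator.logger')  # hypothetical name
    logger.addHandler(logging.NullHandler())
    return logger

a = make_estimator_logger()
b = make_estimator_logger()
print(a is b)           # True: logging.getLogger returns one shared object
print(len(a.handlers))  # 2: handlers accumulated across both "instances"
```

With a shared logger, each record is then emitted once per accumulated handler, which is the kind of logging inconsistency the PR sets out to fix.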
leezu closed pull request #16760: [Estimator] Improve usability and fix logging
inconsistencies
URL: https://github.com/apache/incubator-mxnet/pull/16760
TsingWei opened a new issue #16809: When building MXNet from source via CMake,
if I set `USE_SSE` to `0`, the target `mshadow` is still built with the SSE option on.
URL: https://github.com/apache/incubator-mxnet/issues/16809
## Description
When building MXNet from source via CMake, if I set `USE_SS
haojin2 commented on a change in pull request #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#discussion_r346127691
##
File path: src/operator/numpy/np_matrix_op-inl.h
##
@@ -945,6 +950,205 @@ void NumpyConcatenateBackward(const nnvm::N
haojin2 commented on issue #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#issuecomment-553725065
Why are you also updating dmlc-core?
haojin2 commented on a change in pull request #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#discussion_r346127082
##
File path: src/operator/numpy/np_matrix_op-inl.h
##
@@ -945,6 +950,205 @@ void NumpyConcatenateBackward(const nnvm::N
hzfan commented on a change in pull request #16800: [WIP][DO NOT MERGE][Numpy]
np.linalg.det and np.linalg.slogdet
URL: https://github.com/apache/incubator-mxnet/pull/16800#discussion_r346114041
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -3222,6 +3222,94 @@
access2rohit commented on a change in pull request #16715: Lamb optimizer update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346107803
##
File path: src/operator/optimizer_op.cc
##
@@ -921,5 +923,39 @@ Note that non-zero values for the weight decay
access2rohit commented on a change in pull request #16715: Lamb optimizer update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346107263
##
File path: src/operator/optimizer_op-inl.h
##
@@ -1563,6 +1563,192 @@ inline void AdamUpdateEx(const nnvm::Nod
access2rohit commented on a change in pull request #16715: Lamb optimizer update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346107327
##
File path: tests/python/unittest/test_optimizer.py
##
@@ -425,6 +425,77 @@ def test_nag():
co
TaoLv commented on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-553700038
Yes, static linking is not blocked - in fact it was merged just now. I'm
pinging just because #16805 is also asking for that. :)
---
cjolivier01 commented on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-553697814
Busy at my day job; haven’t had time to mess with it yet. I don’t think
static linking is blocked by this since the behavior would p
eric-haibin-lin commented on a change in pull request #16715: Lamb optimizer
update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346100874
##
File path: src/operator/optimizer_op-inl.h
##
@@ -1563,6 +1563,192 @@ inline void AdamUpdateEx(const nnvm:
eric-haibin-lin commented on a change in pull request #16715: Lamb optimizer
update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346100097
##
File path: tests/python/unittest/test_optimizer.py
##
@@ -425,6 +425,79 @@ def test_nag():
eric-haibin-lin commented on a change in pull request #16715: Lamb optimizer
update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346100215
##
File path: tests/python/unittest/test_optimizer.py
##
@@ -425,6 +425,77 @@ def test_nag():
eric-haibin-lin commented on a change in pull request #16715: Lamb optimizer
update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346099628
##
File path: src/operator/optimizer_op.cc
##
@@ -921,5 +923,39 @@ Note that non-zero values for the weight d
haojin2 commented on a change in pull request #16788: [Numpy][TVM]Add numpy
operator 'polyval' based on tvm
URL: https://github.com/apache/incubator-mxnet/pull/16788#discussion_r346100571
##
File path: src/operator/numpy/np_polyval_op.cc
##
@@ -0,0 +1,129 @@
+/*
+ * Licen
haojin2 commented on a change in pull request #16788: [Numpy][TVM]Add numpy
operator 'polyval' based on tvm
URL: https://github.com/apache/incubator-mxnet/pull/16788#discussion_r346100387
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -39,7 +39,7 @@
'aro
haojin2 commented on a change in pull request #16788: [Numpy][TVM]Add numpy
operator 'polyval' based on tvm
URL: https://github.com/apache/incubator-mxnet/pull/16788#discussion_r346099828
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -39,7 +39,7 @@
'aro
haojin2 commented on a change in pull request #16788: [Numpy][TVM]Add numpy
operator 'polyval' based on tvm
URL: https://github.com/apache/incubator-mxnet/pull/16788#discussion_r346099563
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -5308,3 +5308,60 @@ def nan_to_
haojin2 commented on a change in pull request #16801: add op bitwise_or [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16801#discussion_r346099232
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -39,7 +39,7 @@
'around', 'hypot', 'rad2deg', 'de
pengzhao-intel commented on issue #16772: [MKLDNN] Use MKLDNNRun
URL: https://github.com/apache/incubator-mxnet/pull/16772#issuecomment-553691335
@TaoLv @ciyongch any comments?
pengzhao-intel commented on issue #16772: [MKLDNN] Use MKLDNNRun
URL: https://github.com/apache/incubator-mxnet/pull/16772#issuecomment-553691271
LGTM but let's wait for the performance testing reports.
TaoLv commented on issue #11417: libomp.so dependency (need REAL fix)
URL:
https://github.com/apache/incubator-mxnet/issues/11417#issuecomment-553690014
Hi @cjolivier01, may I have your update?
haojin2 edited a comment on issue #9845: Flaky test_operator.test_reduce
URL:
https://github.com/apache/incubator-mxnet/issues/9845#issuecomment-553689576
Occurred again:
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-cpu/detail/PR-16786/4/pipeline/
marcoabreu opened a new issue #9845: Flaky test_operator.test_reduce
URL: https://github.com/apache/incubator-mxnet/issues/9845
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/incubator-mxnet/detail/PR-9841/1/pipeline
(test unrelated to change)
```
==
haojin2 commented on issue #9845: Flaky test_operator.test_reduce
URL:
https://github.com/apache/incubator-mxnet/issues/9845#issuecomment-553689576
Occurred again:
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-cpu/detail/PR-16786/4/pipeline/294/
MoonBunnyZZZ opened a new issue #16808: About ImageRecordIter source code
URL: https://github.com/apache/incubator-mxnet/issues/16808
## Description
Where is the source code of the ImageRecordIter Python API? I want to learn
its details.
It seems that ImageRecordIter is a native
zhreshold commented on issue #16708: Training an FPN model using grad_req="add"
causes rapid divergence, while manually implemented gradient accumulation
works fine
URL:
https://github.com/apache/incubator-mxnet/issues/16708#issuecomment-553685410
As long as `ElementWiseSum` more than 4
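The equivalence being debated in this issue can be sketched without MXNet. A plain-Python sketch under assumptions: `grad` below is a hypothetical stand-in for a backward pass (its "model" is just a sum), used only to show that adding per-micro-batch gradients into a zeroed buffer, which is what `grad_req="add"` does in place, matches one large batch exactly.

```python
def grad(batch):
    # Hypothetical model: loss = sum(w * x), so d(loss)/dw = sum(x).
    # Gradients of a sum decompose over micro-batches.
    return sum(batch)

# One backward pass over the full batch.
full = grad([1.0, 2.0, 3.0, 4.0])

# Accumulation over micro-batches; the buffer must start from zero
# (with grad_req="add" that is the user's job, e.g. via zero_grad()).
buffer = 0.0
for micro in ([1.0, 2.0], [3.0, 4.0]):
    buffer += grad(micro)

print(full == buffer)  # True
```

If the buffer is not zeroed between optimizer steps, stale gradients leak into the next update, which is one way an `add`-based setup can diverge while a manual implementation that clears its buffer does not.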
TaoLv commented on issue #16471: CMake build with MKL_USE_ILP64 throws type
mismatch
URL:
https://github.com/apache/incubator-mxnet/issues/16471#issuecomment-553685254
Hi @matteosal I'm not sure I understand your question. Since you're trying
to use ILP64, I guess your tensor size is > IN
patriczhao pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 42d3182 migrate cudaMemcpy to cudaMemcpyAsync+cudaStreamSynchronize
(#16790)
add 017f6fa Static
pengzhao-intel merged pull request #16731: Static link MKL-DNN library
URL: https://github.com/apache/incubator-mxnet/pull/16731
pengzhao-intel commented on issue #16731: Static link MKL-DNN library
URL: https://github.com/apache/incubator-mxnet/pull/16731#issuecomment-553682828
Merging now
access2rohit commented on a change in pull request #16715: Lamb optimizer update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346088876
##
File path: src/operator/optimizer_op.cc
##
@@ -921,5 +923,33 @@ Note that non-zero values for the weight decay
access2rohit commented on a change in pull request #16715: Lamb optimizer update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346088727
##
File path: python/mxnet/optimizer/optimizer.py
##
@@ -1244,6 +1244,51 @@ def update(self, index, weight, grad,
access2rohit commented on a change in pull request #16715: Lamb optimizer update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346088325
##
File path: src/operator/optimizer_op.cc
##
@@ -921,5 +923,33 @@ Note that non-zero values for the weight decay
access2rohit commented on a change in pull request #16715: Lamb optimizer update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346088464
##
File path: tests/python/unittest/test_optimizer.py
##
@@ -425,6 +425,77 @@ def test_nag():
co
access2rohit commented on a change in pull request #16715: Lamb optimizer update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346088445
##
File path: tests/python/unittest/test_optimizer.py
##
@@ -425,6 +425,77 @@ def test_nag():
co
zhreshold commented on issue #16708: Training an FPN model using grad_req="add"
causes rapid divergence, while manually implemented gradient accumulation
works fine
URL:
https://github.com/apache/incubator-mxnet/issues/16708#issuecomment-553677925
Interestingly disable `use_p6` in FPN ca
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 07eaf39 Bump the publis
anirudh2290 opened a new pull request #16807: [WIP] Multithreaded Inference
Support
URL: https://github.com/apache/incubator-mxnet/pull/16807
## Description ##
Trying to run CI against 1.6
## Checklist ##
### Essentials ###
Please feel free to remove inapplicable items f
anirudh2290 closed pull request #16756: [WIP] Multithreaded inference backend
support
URL: https://github.com/apache/incubator-mxnet/pull/16756
haojin2 commented on a change in pull request #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#discussion_r346058775
##
File path: tests/python/unittest/test_numpy_op.py
##
@@ -4403,6 +4403,68 @@ def hybrid_forward(self, F, x, *args, **k
haojin2 commented on a change in pull request #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#discussion_r346058084
##
File path: src/operator/numpy/np_diag_op.cc
##
@@ -0,0 +1,55 @@
+/*
+ * Licensed to the Apache Software Foundation (A
haojin2 commented on a change in pull request #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#discussion_r346057670
##
File path: src/operator/numpy/np_diag_op-inl.h
##
@@ -0,0 +1,237 @@
+/*
+ * Licensed to the Apache Software Foundatio
haojin2 commented on a change in pull request #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#discussion_r346057520
##
File path: src/operator/numpy/np_diag_op-inl.h
##
@@ -0,0 +1,237 @@
+/*
+ * Licensed to the Apache Software Foundatio
haojin2 commented on a change in pull request #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#discussion_r346057267
##
File path: src/operator/numpy/np_diag_op-inl.h
##
@@ -0,0 +1,237 @@
+/*
+ * Licensed to the Apache Software Foundatio
haojin2 commented on a change in pull request #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#discussion_r346055710
##
File path: src/operator/numpy/np_diag_op-inl.h
##
@@ -0,0 +1,237 @@
+/*
+ * Licensed to the Apache Software Foundatio
haojin2 commented on a change in pull request #16786: Add OP diag [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16786#discussion_r346055243
##
File path: src/operator/numpy/np_diag_op-inl.h
##
@@ -0,0 +1,237 @@
+/*
Review comment:
put this op in `np_matr
haojin2 commented on a change in pull request #16804: add numpy op full_like,
c++ impl, fix zeros_like, ones_like type infe…
URL: https://github.com/apache/incubator-mxnet/pull/16804#discussion_r346052213
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -39,7 +39,7 @@
haojin2 commented on a change in pull request #16804: add numpy op full_like,
c++ impl, fix zeros_like, ones_like type infe…
URL: https://github.com/apache/incubator-mxnet/pull/16804#discussion_r346050873
##
File path: python/mxnet/ndarray/numpy/_op.py
##
@@ -39,7 +39,7 @@
haojin2 closed issue #16583: Many cudaMemCopy in operators
URL: https://github.com/apache/incubator-mxnet/issues/16583
haojin2 commented on issue #16583: Many cudaMemCopy in operators
URL:
https://github.com/apache/incubator-mxnet/issues/16583#issuecomment-553645077
Solved in #16790, closing.
haoj pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from e88e97f [Numpy] Fix collect_params().zero_grad() in gluon numpy
interface (#16716)
add 42d3182 migrate
haojin2 merged pull request #16790: migrate cudaMemcpy to
cudaMemcpyAsync+cudaStreamSynchronize
URL: https://github.com/apache/incubator-mxnet/pull/16790
eric-haibin-lin commented on a change in pull request #16715: Lamb optimizer
update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346028602
##
File path: src/operator/optimizer_op-inl.h
##
@@ -1563,6 +1563,186 @@ inline void AdamUpdateEx(const nnvm:
eric-haibin-lin commented on a change in pull request #16715: Lamb optimizer
update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346027278
##
File path: python/mxnet/optimizer/optimizer.py
##
@@ -1244,6 +1244,51 @@ def update(self, index, weight, g
eric-haibin-lin commented on a change in pull request #16715: Lamb optimizer
update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346027919
##
File path: src/operator/optimizer_op.cc
##
@@ -921,5 +923,33 @@ Note that non-zero values for the weight d
eric-haibin-lin commented on a change in pull request #16715: Lamb optimizer
update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346033393
##
File path: tests/python/unittest/test_optimizer.py
##
@@ -425,6 +425,77 @@ def test_nag():
eric-haibin-lin commented on a change in pull request #16715: Lamb optimizer
update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346026950
##
File path: python/mxnet/optimizer/optimizer.py
##
@@ -1244,6 +1244,51 @@ def update(self, index, weight, g
eric-haibin-lin commented on a change in pull request #16715: Lamb optimizer
update
URL: https://github.com/apache/incubator-mxnet/pull/16715#discussion_r346030248
##
File path: tests/python/unittest/test_optimizer.py
##
@@ -425,6 +425,77 @@ def test_nag():
zhreshold commented on issue #16806: Segfault in SetDepedency
URL:
https://github.com/apache/incubator-mxnet/issues/16806#issuecomment-553609730
Can you minimize the reproduction code? For example, would a simple snippet
like this reproduce the segfault?
```python
with autograd.record():
    ...
```
reminisce closed issue #16776: [Flaky] mx.np.linalg.inv
URL: https://github.com/apache/incubator-mxnet/issues/16776
access2rohit edited a comment on issue #16715: Lamb optimizer update
URL: https://github.com/apache/incubator-mxnet/pull/16715#issuecomment-553578991
@mxnet-label-bot add [pr-awaiting-review]
access2rohit commented on issue #16715: Lamb optimizer update
URL: https://github.com/apache/incubator-mxnet/pull/16715#issuecomment-553578991
@mxnet-label-bot add [pr-ready-to-review]
anirudh2290 commented on issue #16797: Fix SliceChannel Type inference (#16748)
URL: https://github.com/apache/incubator-mxnet/pull/16797#issuecomment-553559262
@ptrendx @samskalicky
sxjscience pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 9b25db0 Fix numpy-compatible mean output type for integer inputs
(#16792)
add e88e97f [Numpy] Fi
sxjscience merged pull request #16716: [Numpy] Fix collect_params().zero_grad()
in gluon numpy interface
URL: https://github.com/apache/incubator-mxnet/pull/16716
reminisce pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 7d4f2f3 [MXNET-1421] Added (CuDNN)BatchNorm operator to the list of
mirrored operators (#16022)
ad
reminisce merged pull request #16792: Fix numpy-compatible mean output type for
integer inputs
URL: https://github.com/apache/incubator-mxnet/pull/16792
haojin2 commented on a change in pull request #16801: add op bitwise_or [numpy]
URL: https://github.com/apache/incubator-mxnet/pull/16801#discussion_r345940563
##
File path: python/mxnet/numpy/multiarray.py
##
@@ -2749,6 +2749,55 @@ def lcm(x1, x2, out=None, **kwargs):
Kh4L opened a new issue #16806: Segfault in SetDepedency
URL: https://github.com/apache/incubator-mxnet/issues/16806
## Description
GluonCV Mask-RCNN training script segfaults in SetDepedency when using
`mrcnn_mask_target` op in symbolic mode (with the hybridized model).
`master`s
stu1130 commented on issue #15383: [numpy] np.random.multinomial is different
from _np
URL:
https://github.com/apache/incubator-mxnet/issues/15383#issuecomment-553545020
@TomasBahnik I was able to reproduce the issue with mxnet 1.6.0b20190926 on
Windows. The pip wheel doesn't have COMMIT_
aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git
The following commit(s) were added to refs/heads/asf-site by this push:
new 0b9900e Bump the publis
matteosal commented on issue #16805: About OpenMP dependencies
URL:
https://github.com/apache/incubator-mxnet/issues/16805#issuecomment-553530253
Ah ok thanks for the explanation, and sorry for not having checked existing
issues
matteosal closed issue #16805: About OpenMP dependencies
URL: https://github.com/apache/incubator-mxnet/issues/16805
ptrendx commented on a change in pull request #16790: migrate cudaMemcpy to
cudaMemcpyAsync+cudaStreamSynchronize
URL: https://github.com/apache/incubator-mxnet/pull/16790#discussion_r345898676
##
File path: src/operator/contrib/multi_proposal.cu
##
@@ -529,50 +537,51 @@ c
ptrendx commented on a change in pull request #16790: migrate cudaMemcpy to
cudaMemcpyAsync+cudaStreamSynchronize
URL: https://github.com/apache/incubator-mxnet/pull/16790#discussion_r345897939
##
File path: src/kvstore/kvstore_utils.cu
##
@@ -82,16 +82,16 @@ size_t Unique
haibin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.
from 47fd3a0 Link fixes4 (#16764)
add 7d4f2f3 [MXNET-1421] Added (CuDNN)BatchNorm operator to the list of
ptrendx commented on a change in pull request #16790: migrate cudaMemcpy to
cudaMemcpyAsync+cudaStreamSynchronize
URL: https://github.com/apache/incubator-mxnet/pull/16790#discussion_r345896857
##
File path: src/operator/contrib/proposal.cu
##
@@ -456,9 +459,10 @@ class Pr
ptrendx commented on a change in pull request #16790: migrate cudaMemcpy to
cudaMemcpyAsync+cudaStreamSynchronize
URL: https://github.com/apache/incubator-mxnet/pull/16790#discussion_r345896911
##
File path: src/operator/contrib/proposal.cu
##
@@ -552,8 +557,8 @@ class Pro
eric-haibin-lin merged pull request #16022: [MXNET-1421] Added (CuDNN)BatchNorm
operator to the list of mirrored operators
URL: https://github.com/apache/incubator-mxnet/pull/16022
ptrendx commented on issue #16798: Add unoptimized symbol to executor for
sharing
URL: https://github.com/apache/incubator-mxnet/pull/16798#issuecomment-553490528
The error in the Scala CPU test
(http://jenkins.mxnet-ci.amazon-ml.com/blue/rest/organizations/jenkins/pipelines/mxnet-validati
nickguletskii commented on a change in pull request #16790: migrate cudaMemcpy
to cudaMemcpyAsync+cudaStreamSynchronize
URL: https://github.com/apache/incubator-mxnet/pull/16790#discussion_r345849814
##
File path: src/operator/contrib/proposal.cu
##
@@ -552,8 +557,8 @@ cla
nickguletskii commented on a change in pull request #16790: migrate cudaMemcpy
to cudaMemcpyAsync+cudaStreamSynchronize
URL: https://github.com/apache/incubator-mxnet/pull/16790#discussion_r345848641
##
File path: src/operator/contrib/proposal.cu
##
@@ -456,9 +459,10 @@ cl
TaoLv commented on issue #16805: About OpenMP dependencies
URL:
https://github.com/apache/incubator-mxnet/issues/16805#issuecomment-553468565
libiomp5 has been removed from master if you’re not using MKL BLAS. The
problem of two OpenMP runtimes in the CMake build is discussed at the end of #11417
matteosal commented on issue #16471: CMake build with MKL_USE_ILP64 throws type
mismatch
URL:
https://github.com/apache/incubator-mxnet/issues/16471#issuecomment-553466578
> AFAIK, MXNet doesn't work with ILP64. Could you please elaborate what's
the problem you want to solve?
@TaoL
matteosal opened a new issue #16805: About OpenMP dependencies
URL: https://github.com/apache/incubator-mxnet/issues/16805
Cc'ing @dszeto2
On Linux, when MXNet is built using just make, the library shows a
dependency on `libiomp5`. When built through CMake, additional dependencies
TaoLv commented on issue #16749: Ask for advice about using my int8gemm
URL:
https://github.com/apache/incubator-mxnet/issues/16749#issuecomment-553406431
Looks like the error is not related to the MXNet project and you already know
how to run the experiments. So I'm closing this issue. Feel
TaoLv closed issue #16749: Ask for advice about using my int8gemm
URL: https://github.com/apache/incubator-mxnet/issues/16749
Alicia1529 opened a new pull request #16804: add numpy op full_like, c++ impl,
fix zeros_like, ones_like type infe…
URL: https://github.com/apache/incubator-mxnet/pull/16804
## Description ##
1. Add the C++ implementation of numpy op full_like; please ignore the
previous PR implemented w
TaoLv commented on issue #15348: Is it possible to add environment variables
directly to compilation options when compiling Mxnet?
URL:
https://github.com/apache/incubator-mxnet/issues/15348#issuecomment-553405151
@xianyujie @vzhangmeng726 Try to force engine type to `NaiveEngine` here:
h
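The NaiveEngine suggestion above is applied through an environment variable that MXNet reads when the library is first imported (`MXNET_ENGINE_TYPE` is a documented MXNet variable). A minimal sketch; it only sets the variable and does not actually import MXNet:

```python
import os

# The engine is chosen at import time, so this must run before `import mxnet`.
os.environ['MXNET_ENGINE_TYPE'] = 'NaiveEngine'

# import mxnet as mx  # would now execute synchronously, one op at a time,
#                     # which makes crashes and hangs much easier to localize
print(os.environ['MXNET_ENGINE_TYPE'])
```

Alternatively, set it in the shell (`export MXNET_ENGINE_TYPE=NaiveEngine`) before launching the script.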
ymzx opened a new issue #16803: src/storage/./pooled_storage_manager.h:157:
cudaMalloc failed: out of memory
URL: https://github.com/apache/incubator-mxnet/issues/16803
## Description
It gets `cudaMalloc failed: out of memory` when I run a PixelLink model,
which has an FCN-like structure.