[GitHub] [incubator-mxnet] reminisce commented on issue #16450: Add test pipeline for USE_TVM_OP=OFF on Unix

2019-10-15 Thread GitBox
reminisce commented on issue #16450: Add test pipeline for USE_TVM_OP=OFF on 
Unix
URL: https://github.com/apache/incubator-mxnet/pull/16450#issuecomment-542507363
 
 
   The `dist-kvstore tests GPU` failure has shown up in quite a few PRs recently. 
Wonder what the root cause is.




[GitHub] [incubator-mxnet] reminisce commented on issue #16436: Add sum for boolean type when not built with TVM

2019-10-15 Thread GitBox
reminisce commented on issue #16436: Add sum for boolean type when not built 
with TVM
URL: https://github.com/apache/incubator-mxnet/pull/16436#issuecomment-542506826
 
 
   The `dist-kvstore tests GPU` failure has shown up in quite a few PRs recently. 
Wonder what the root cause is.




[GitHub] [incubator-mxnet] TaoLv opened a new pull request #16501: [Doc] Use mirror link in the download page

2019-10-15 Thread GitBox
TaoLv opened a new pull request #16501: [Doc] Use mirror link in the download 
page
URL: https://github.com/apache/incubator-mxnet/pull/16501
 
 
   ## Description ##
   
   1. Use the mirror link on the download page, per 
http://www.apache.org/dev/release-publishing.html#distribution_dist.
   2. Add a link for the KEY file.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on the test set, and a reference 
to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is backward incompatible, why must it be made?
   - Interesting edge cases to note here
   




[GitHub] [incubator-mxnet] wkcn commented on issue #16234: [MXNET-1426] Fix the wrong result of sum, mean, argmin, argmax when inputs contain inf or nan

2019-10-15 Thread GitBox
wkcn commented on issue #16234: [MXNET-1426] Fix the wrong result of sum, mean, 
argmin, argmax when inputs contain inf or nan
URL: https://github.com/apache/incubator-mxnet/pull/16234#issuecomment-542495435
 
 
   Hi @eric-haibin-lin , could you please help review this?
   It fixes a bug that produces a wrong result, or a result inconsistent with 
NumPy's.




[GitHub] [incubator-mxnet] wkcn edited a comment on issue #16234: [MXNET-1426] Fix the wrong result of sum, mean, argmin, argmax when inputs contain inf or nan

2019-10-15 Thread GitBox
wkcn edited a comment on issue #16234: [MXNET-1426] Fix the wrong result of 
sum, mean, argmin, argmax when inputs contain inf or nan
URL: https://github.com/apache/incubator-mxnet/pull/16234#issuecomment-542495435
 
 
   Hi @eric-haibin-lin , could you please help review this?
   It fixes a bug that produces a wrong result, or a result inconsistent with 
NumPy's.
   
   Thank you so much!




[GitHub] [incubator-mxnet] rongzha1 commented on issue #16468: [mkldnn-1.0] add mkldnn subgraph fc

2019-10-15 Thread GitBox
rongzha1 commented on issue #16468: [mkldnn-1.0] add mkldnn subgraph fc
URL: https://github.com/apache/incubator-mxnet/pull/16468#issuecomment-542491120
 
 
   I didn't change the file that failed lint in the sanity test, and it passes 
on my local branch. So please merge directly if the other CI tests pass.




[GitHub] [incubator-mxnet] vexilligera commented on issue #16469: Tests of NumPy interoperability

2019-10-15 Thread GitBox
vexilligera commented on issue #16469: Tests of NumPy interoperability
URL: https://github.com/apache/incubator-mxnet/pull/16469#issuecomment-542477763
 
 
   > Please make sure the test can pass CI.
   
   Hi, the CI tests have passed.




[GitHub] [incubator-mxnet] aaronmarkham opened a new pull request #16500: Fixing broken links

2019-10-15 Thread GitBox
aaronmarkham opened a new pull request #16500: Fixing broken links
URL: https://github.com/apache/incubator-mxnet/pull/16500
 
 
   Lots more to do, but let's get some fixes in there!




[GitHub] [incubator-mxnet] starimpact commented on issue #15102: the grad of lars should be scaled in lbsgd

2019-10-15 Thread GitBox
starimpact commented on issue #15102: the grad of lars should be scaled in lbsgd
URL: 
https://github.com/apache/incubator-mxnet/issues/15102#issuecomment-542476256
 
 
   `_l2norm` is time consuming for a large parameter, so I suggest something 
like this (assuming `multiply` and `math` are in scope, as they are in 
optimizer.py):
   ```python
   def _l2norm(self, v):
       """Inner-product (L2 norm) implementation."""
       # For a big local parameter, sample roughly one element in ten.
       v = v.reshape(-1)
       if len(v) > 10:
           step = len(v) // 10 + 1  # integer division so the slice step is an int
           v = v[::step]
       norm = multiply(v, v).asnumpy().sum()
       # norm = (multiply(v, v).sum()).asnumpy()  # alternative: reduce on device first
       norm = math.sqrt(norm)
       # NOTE: this is the norm of the sample, not of the full vector,
       # trading accuracy for speed.
       return norm
   ```




[GitHub] [incubator-mxnet] pengzhao-intel commented on issue #16477: added more tests to verify support for large vector

2019-10-15 Thread GitBox
pengzhao-intel commented on issue #16477: added more tests to verify support 
for large vector
URL: https://github.com/apache/incubator-mxnet/pull/16477#issuecomment-542472043
 
 
   @wuxun-zhang maybe take a look at the new tests :)




[GitHub] [incubator-mxnet] aaronmarkham commented on issue #15925: [CI] illegal memory access

2019-10-15 Thread GitBox
aaronmarkham commented on issue #15925: [CI] illegal memory access
URL: 
https://github.com/apache/incubator-mxnet/issues/15925#issuecomment-542466581
 
 
   Happened here too:
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Funix-gpu/detail/PR-16496/1/pipeline/294




[GitHub] [incubator-mxnet] leezu commented on issue #16499: mxnet/tensorRT docker image coredumps

2019-10-15 Thread GitBox
leezu commented on issue #16499: mxnet/tensorRT docker image coredumps 
URL: 
https://github.com/apache/incubator-mxnet/issues/16499#issuecomment-542463446
 
 
   Who owns the Docker images at https://hub.docker.com/u/mxnet ? The tensorRT 
image hasn't been updated in a year.




[GitHub] [incubator-mxnet] anirudh2290 commented on a change in pull request #16477: added more tests to verify support for large vector

2019-10-15 Thread GitBox
anirudh2290 commented on a change in pull request #16477: added more tests to 
verify support for large vector
URL: https://github.com/apache/incubator-mxnet/pull/16477#discussion_r335235795
 
 

 ##########
 File path: tests/nightly/test_large_vector.py
 ##########
 @@ -708,6 +708,182 @@ def test_full():
     assert a[-1] == 3
 
 
+def test_astype():
+    x = create_vector(size=LARGE_X//4)
+    x = nd.tile(x, 4)
+    y = x.astype('int32')
+    assert y.dtype == np.int32
+    assert y[-1] == LARGE_X//4-1
+
+
+def test_cast():
+    x = create_vector(size=LARGE_X//4)
+    x = nd.tile(x, 4)
+    y = nd.cast(x, np.int32)
+    assert y.dtype == np.int32
+    assert y[-1] == LARGE_X//4-1
+
+
+def test_repeat():
+    x = create_vector(size=LARGE_X//2)
+    y = nd.repeat(x, repeats=2, axis = 0)
+    assert y.shape[0] == LARGE_X
+    assert y[1] == 0
+    assert y[LARGE_X-1] == LARGE_X//2-1
+
+
+def create_input_for_rounding_ops():
+    inp = nd.arange(-LARGE_X//2, LARGE_X//2, dtype=np.float64)
+    inp = inp/2
+    return inp
+
+
+def test_ceil():
+    x = create_input_for_rounding_ops()
+    y = nd.ceil(x)
+    assert y[LARGE_X//2-2] == -1
+    assert y[LARGE_X//2-1] == 0
+    assert y[LARGE_X//2] == 0
+    assert y[LARGE_X//2+1] == 1
+    assert y[LARGE_X//2+2] == 1
+
+
+def test_fix():
+    x = create_input_for_rounding_ops()
+    y = nd.fix(x)
+    assert y[LARGE_X//2-2] == -1
 
 Review comment:
   Can you add a comment describing what it is testing, here and for the other tests?




[GitHub] [incubator-mxnet] anirudh2290 commented on a change in pull request #16477: added more tests to verify support for large vector

2019-10-15 Thread GitBox
anirudh2290 commented on a change in pull request #16477: added more tests to 
verify support for large vector
URL: https://github.com/apache/incubator-mxnet/pull/16477#discussion_r335234920
 
 

 ##########
 File path: tests/nightly/test_large_vector.py
 ##########
 @@ -708,6 +708,182 @@ def test_full():
     assert a[-1] == 3
 
 
+def test_astype():
+    x = create_vector(size=LARGE_X//4)
+    x = nd.tile(x, 4)
+    y = x.astype('int32')
+    assert y.dtype == np.int32
+    assert y[-1] == LARGE_X//4-1
+
+
+def test_cast():
+    x = create_vector(size=LARGE_X//4)
+    x = nd.tile(x, 4)
+    y = nd.cast(x, np.int32)
+    assert y.dtype == np.int32
+    assert y[-1] == LARGE_X//4-1
+
+
+def test_repeat():
+    x = create_vector(size=LARGE_X//2)
+    y = nd.repeat(x, repeats=2, axis = 0)
+    assert y.shape[0] == LARGE_X
+    assert y[1] == 0
+    assert y[LARGE_X-1] == LARGE_X//2-1
+
+
+def create_input_for_rounding_ops():
+    inp = nd.arange(-LARGE_X//2, LARGE_X//2, dtype=np.float64)
+    inp = inp/2
+    return inp
+
+
+def test_ceil():
+    x = create_input_for_rounding_ops()
+    y = nd.ceil(x)
+    assert y[LARGE_X//2-2] == -1
 
 Review comment:
   What is this testing? Can you add a comment here?




[GitHub] [incubator-mxnet] anirudh2290 commented on a change in pull request #16477: added more tests to verify support for large vector

2019-10-15 Thread GitBox
anirudh2290 commented on a change in pull request #16477: added more tests to 
verify support for large vector
URL: https://github.com/apache/incubator-mxnet/pull/16477#discussion_r335235628
 
 

 ##########
 File path: tests/nightly/test_large_vector.py
 ##########
 @@ -708,6 +708,182 @@ def test_full():
     assert a[-1] == 3
 
 
+def test_astype():
+    x = create_vector(size=LARGE_X//4)
+    x = nd.tile(x, 4)
+    y = x.astype('int32')
+    assert y.dtype == np.int32
+    assert y[-1] == LARGE_X//4-1
+
+
+def test_cast():
+    x = create_vector(size=LARGE_X//4)
+    x = nd.tile(x, 4)
+    y = nd.cast(x, np.int32)
+    assert y.dtype == np.int32
+    assert y[-1] == LARGE_X//4-1
+
+
+def test_repeat():
+    x = create_vector(size=LARGE_X//2)
+    y = nd.repeat(x, repeats=2, axis = 0)
+    assert y.shape[0] == LARGE_X
+    assert y[1] == 0
+    assert y[LARGE_X-1] == LARGE_X//2-1
+
+
+def create_input_for_rounding_ops():
+    inp = nd.arange(-LARGE_X//2, LARGE_X//2, dtype=np.float64)
+    inp = inp/2
+    return inp
+
+
+def test_ceil():
+    x = create_input_for_rounding_ops()
+    y = nd.ceil(x)
+    assert y[LARGE_X//2-2] == -1
+    assert y[LARGE_X//2-1] == 0
+    assert y[LARGE_X//2] == 0
+    assert y[LARGE_X//2+1] == 1
+    assert y[LARGE_X//2+2] == 1
+
+
+def test_fix():
+    x = create_input_for_rounding_ops()
+    y = nd.fix(x)
+    assert y[LARGE_X//2-2] == -1
+    assert y[LARGE_X//2-1] == 0
+    assert y[LARGE_X//2] == 0
+    assert y[LARGE_X//2+1] == 0
+    assert y[LARGE_X//2+2] == 1
+
+
+def test_floor():
+    x = create_input_for_rounding_ops()
+    y = nd.floor(x)
+    assert y[LARGE_X//2-2] == -1
+    assert y[LARGE_X//2-1] == -1
+    assert y[LARGE_X//2] == 0
+    assert y[LARGE_X//2+1] == 0
+    assert y[LARGE_X//2+2] == 1
+
+
+def test_rint():
+    x = create_input_for_rounding_ops()
+    y = nd.rint(x)
+    assert y[LARGE_X//2-2] == -1
+    assert y[LARGE_X//2-1] == -1
+    assert y[LARGE_X//2] == 0
+    assert y[LARGE_X//2+1] == 0
+    assert y[LARGE_X//2+2] == 1
+
+
+def test_round():
+    x = create_input_for_rounding_ops()
+    y = nd.round(x)
+    assert y[LARGE_X//2-2] == -1
+    assert y[LARGE_X//2-1] == -1
+    assert y[LARGE_X//2] == 0
+    assert y[LARGE_X//2+1] == 1
+    assert y[LARGE_X//2+2] == 1
+
+
+def test_trunc():
+    x = create_input_for_rounding_ops()
+    y = nd.trunc(x)
+    assert y[LARGE_X//2-2] == -1
+    assert y[LARGE_X//2-1] == 0
+    assert y[LARGE_X//2] == 0
+    assert y[LARGE_X//2+1] == 0
 
 Review comment:
   Looks like a bunch of repeated code that could be factored into a shared helper for these 5 tests.
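
   For illustration, a minimal sketch of such a helper (the helper name 
`check_rounding_op` is hypothetical; `create_input_for_rounding_ops`, `LARGE_X`, 
and `nd` come from the PR's test file):
   ```python
   def check_rounding_op(op, expected):
       # Apply the rounding op, then check the five values around the midpoint.
       x = create_input_for_rounding_ops()
       y = op(x)
       mid = LARGE_X // 2
       for offset, value in zip(range(-2, 3), expected):
           assert y[mid + offset] == value


   def test_ceil():
       check_rounding_op(nd.ceil, [-1, 0, 0, 1, 1])
   ```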




[GitHub] [incubator-mxnet] anirudh2290 commented on a change in pull request #16477: added more tests to verify support for large vector

2019-10-15 Thread GitBox
anirudh2290 commented on a change in pull request #16477: added more tests to 
verify support for large vector
URL: https://github.com/apache/incubator-mxnet/pull/16477#discussion_r335236059
 
 

 ##########
 File path: tests/nightly/test_large_vector.py
 ##########
 @@ -708,6 +708,182 @@ def test_full():
     assert a[-1] == 3
 
 
+def test_astype():
+    x = create_vector(size=LARGE_X//4)
+    x = nd.tile(x, 4)
+    y = x.astype('int32')
+    assert y.dtype == np.int32
+    assert y[-1] == LARGE_X//4-1
+
+
+def test_cast():
+    x = create_vector(size=LARGE_X//4)
+    x = nd.tile(x, 4)
+    y = nd.cast(x, np.int32)
+    assert y.dtype == np.int32
+    assert y[-1] == LARGE_X//4-1
+
+
+def test_repeat():
+    x = create_vector(size=LARGE_X//2)
+    y = nd.repeat(x, repeats=2, axis = 0)
+    assert y.shape[0] == LARGE_X
+    assert y[1] == 0
+    assert y[LARGE_X-1] == LARGE_X//2-1
+
+
+def create_input_for_rounding_ops():
+    inp = nd.arange(-LARGE_X//2, LARGE_X//2, dtype=np.float64)
+    inp = inp/2
+    return inp
+
+
+def test_ceil():
+    x = create_input_for_rounding_ops()
+    y = nd.ceil(x)
+    assert y[LARGE_X//2-2] == -1
+    assert y[LARGE_X//2-1] == 0
+    assert y[LARGE_X//2] == 0
+    assert y[LARGE_X//2+1] == 1
+    assert y[LARGE_X//2+2] == 1
+
+
+def test_fix():
+    x = create_input_for_rounding_ops()
+    y = nd.fix(x)
+    assert y[LARGE_X//2-2] == -1
+    assert y[LARGE_X//2-1] == 0
+    assert y[LARGE_X//2] == 0
+    assert y[LARGE_X//2+1] == 0
+    assert y[LARGE_X//2+2] == 1
+
+
+def test_floor():
+    x = create_input_for_rounding_ops()
+    y = nd.floor(x)
+    assert y[LARGE_X//2-2] == -1
+    assert y[LARGE_X//2-1] == -1
+    assert y[LARGE_X//2] == 0
+    assert y[LARGE_X//2+1] == 0
+    assert y[LARGE_X//2+2] == 1
+
+
+def test_rint():
+    x = create_input_for_rounding_ops()
+    y = nd.rint(x)
+    assert y[LARGE_X//2-2] == -1
+    assert y[LARGE_X//2-1] == -1
+    assert y[LARGE_X//2] == 0
+    assert y[LARGE_X//2+1] == 0
+    assert y[LARGE_X//2+2] == 1
+
+
+def test_round():
+    x = create_input_for_rounding_ops()
+    y = nd.round(x)
+    assert y[LARGE_X//2-2] == -1
+    assert y[LARGE_X//2-1] == -1
+    assert y[LARGE_X//2] == 0
+    assert y[LARGE_X//2+1] == 1
+    assert y[LARGE_X//2+2] == 1
+
+
+def test_trunc():
+    x = create_input_for_rounding_ops()
+    y = nd.trunc(x)
+    assert y[LARGE_X//2-2] == -1
+    assert y[LARGE_X//2-1] == 0
+    assert y[LARGE_X//2] == 0
+    assert y[LARGE_X//2+1] == 0
+    assert y[LARGE_X//2+2] == 1
+
+
+def test_arcsin():
+    x = nd.array([-1, -.707, 0, .707, 1])
+    x = nd.tile(x, LARGE_X//5)
+    y = nd.arcsin(x)
+    assert_almost_equal(y[0].asnumpy(), -np.pi/2, atol=1e-3)
+    assert_almost_equal(y[1].asnumpy(), -np.pi/4, atol=1e-3)
+    assert_almost_equal(y[-3].asnumpy(), 0, atol=1e-3)
+    assert_almost_equal(y[-2].asnumpy(), np.pi/4, atol=1e-3)
+    assert_almost_equal(y[-1].asnumpy(), np.pi/2, atol=1e-3)
+
+
+def test_arccos():
+    x = nd.array([-1, -.707, 0, .707, 1])
+    x = nd.tile(x, LARGE_X//5)
+    y = nd.arccos(x)
+    assert_almost_equal(y[0].asnumpy(), np.pi, atol=1e-3)
+    assert_almost_equal(y[1].asnumpy(), 3*np.pi/4, atol=1e-3)
+    assert_almost_equal(y[-3].asnumpy(), np.pi/2, atol=1e-3)
+    assert_almost_equal(y[-2].asnumpy(), np.pi/4, atol=1e-3)
+    assert_almost_equal(y[-1].asnumpy(), 0, atol=1e-3)
+
+
+def test_arctan():
+    x = nd.array([-np.Inf, -1, 0, 1, np.Inf])
+    x = nd.tile(x, LARGE_X//5)
+    y = nd.arctan(x)
+    assert_almost_equal(y[0].asnumpy(), -np.pi/2, atol=1e-3)
+    assert_almost_equal(y[1].asnumpy(), -np.pi/4, atol=1e-3)
+    assert_almost_equal(y[-3].asnumpy(), 0, atol=1e-3)
+    assert_almost_equal(y[-2].asnumpy(), np.pi/4, atol=1e-3)
+    assert_almost_equal(y[-1].asnumpy(), np.pi/2, atol=1e-3)
+
+
+def test_sin():
+    x = nd.array([-np.pi/2, -np.pi/4, 0, np.pi/4, np.pi/2])
+    x = nd.tile(x, LARGE_X//5)
+    y = nd.sin(x)
+    assert_almost_equal(y[0].asnumpy(), -1, atol=1e-3)
+    assert_almost_equal(y[1].asnumpy(), -.707, atol=1e-3)
+    assert_almost_equal(y[-3].asnumpy(), 0, atol=1e-3)
+    assert_almost_equal(y[-2].asnumpy(), .707, atol=1e-3)
+    assert_almost_equal(y[-1].asnumpy(), 1, atol=1e-3)
+
+
+def test_cos():
+    x = nd.array([0, np.pi/4, np.pi/2, 3*np.pi/4, np.pi])
+    x = nd.tile(x, LARGE_X//5)
+    y = nd.cos(x)
+    assert_almost_equal(y[0].asnumpy(), 1, atol=1e-3)
+    assert_almost_equal(y[1].asnumpy(), .707, atol=1e-3)
+    assert_almost_equal(y[-3].asnumpy(), 0, atol=1e-3)
+    assert_almost_equal(y[-2].asnumpy(), -.707, atol=1e-3)
+    assert_almost_equal(y[-1].asnumpy(), -1, atol=1e-3)
+
+
+def test_tan():
+    x = nd.array([-np.pi/4, 0, np.pi/4])
+    x = nd.tile(x, LARGE_X//3)
+    y = nd.tan(x)
+    assert y[0] == -1
+    assert y[1] == 0
+    assert y[-1] == 1
+
+
+def test_radians():
+    x = nd.array([0, 90, 180, 270, 360])
+    x = nd.tile(x, LARGE_X//5)
+    y = nd.radians(x)
+    assert_almost_equal(y[0].asnumpy(), 0, atol=1e-3)
 
 

[GitHub] [incubator-mxnet] wx3000 opened a new issue #16499: mxnet/tensorRT docker image coredumps

2019-10-15 Thread GitBox
wx3000 opened a new issue #16499: mxnet/tensorRT docker image coredumps 
URL: https://github.com/apache/incubator-mxnet/issues/16499
 
 
   ## Description
   
   The docker image mxnet/tensorrt, which integrates MXNet with TensorRT, core 
dumps. The docker image has TRT 4.0, Python 3.5, CUDA 9, and MXNet 1.3.
   
   ## Environment info (Required)
   AWS base DLAMI on a G4 instance. The DLAMI version is 19.2 and the OS is 
Ubuntu. I installed docker 19.2 and nvidia-docker on top of it. 
   
   First, install docker:
   https://docs.docker.com/install/linux/docker-ce/ubuntu/
   
   There is one issue with a workaround: 
https://github.com/docker/for-linux/issues/813
   sudo apt-get install runc=1.0.0~rc7+git20190403.029124da-0ubuntu1~16.04.4
   sudo apt-get install docker-ce
   
   Then I installed the latest nvidia-docker (which requires docker 19.03):
   https://github.com/NVIDIA/nvidia-docker
   
   Then I pulled and ran the image:
   $ docker run --gpus all -it mxnet/tensorrt bash
   
   I followed the python scripts on this page for the mxnet/tensorRT integration: 
   
https://github.com/apache/incubator-mxnet/blob/8004a027ad6a73f8f6eae102de8d249fbdfb9a2d/docs/python_docs/python/tutorials/performance/backend/tensorrt/tensorrt.md
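
   For reference, a minimal in-container sanity check (hypothetical, not part of 
the original report) to confirm the image's MXNet build can see the GPU before 
running the tutorial scripts:
   ```python
   import mxnet as mx

   print(mx.__version__)  # the image is described as shipping MXNet 1.3
   # Allocating on the GPU fails fast if the device is not visible:
   a = mx.nd.ones((2, 2), ctx=mx.gpu(0))
   print(a.asnumpy())  # asnumpy() forces a sync, surfacing any GPU error
   ```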
   
   




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16499: mxnet/tensorRT docker image coredumps

2019-10-15 Thread GitBox
mxnet-label-bot commented on issue #16499: mxnet/tensorRT docker image 
coredumps 
URL: 
https://github.com/apache/incubator-mxnet/issues/16499#issuecomment-542460604
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): Feature, Performance




[incubator-mxnet] branch master updated (06438ab -> 0c00a79)

2019-10-15 Thread haibin
This is an automated email from the ASF dual-hosted git repository.

haibin pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 06438ab  Mxnet allclose (#14443)
 add 0c00a79  Fix optimizer bug for np attribute (#16494)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/optimizer/optimizer.py   | 2 +-
 tests/python/unittest/test_numpy_gluon.py | 9 +
 2 files changed, 10 insertions(+), 1 deletion(-)




[GitHub] [incubator-mxnet] eric-haibin-lin merged pull request #16494: Proper handling of "allow_np_array" attribute in optimizer

2019-10-15 Thread GitBox
eric-haibin-lin merged pull request #16494: Proper handling of "allow_np_array" 
attribute in optimizer
URL: https://github.com/apache/incubator-mxnet/pull/16494
 
 
   




[GitHub] [incubator-mxnet] ChaiBapchya opened a new issue #16498: [CI] Centos GPU Seg Fault

2019-10-15 Thread GitBox
ChaiBapchya opened a new issue #16498: [CI] Centos GPU Seg Fault
URL: https://github.com/apache/incubator-mxnet/issues/16498
 
 
   Seg Fault
   Unrelated PR - #16497 
   Pipeline - 
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fcentos-gpu/detail/PR-16497/1/pipeline
   ```
   src/operator/numpy/../tensor/broadcast_reduce-inl.cuh: In function 'void mxnet::op::broadcast::ReduceImpl(cudaStream_t, const mxnet::TBlob&, const mxnet::TBlob&, const mxnet::TBlob&, mxnet::OpReqType, const mxnet::TBlob&, const mshadow::Tensor&, const mxnet::op::broadcast::ReduceImplConfig&) [with Reducer = mshadow::red::sum; int ndim = 4; DType = long int; OP1 = mxnet::op::mshadow_op::mul; OP2 = mxnet::op::mshadow_op::hypot_grad_left; cudaStream_t = CUstream_st*]':
   src/operator/numpy/../tensor/broadcast_reduce-inl.cuh:564:1: internal compiler error: Segmentation fault
    void ReduceImpl(cudaStream_t stream, const TBlob& small, const TBlob& lhs, const TBlob& rhs,
    ^
   ```




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16498: [CI] Centos GPU Seg Fault

2019-10-15 Thread GitBox
mxnet-label-bot commented on issue #16498: [CI] Centos GPU Seg Fault
URL: 
https://github.com/apache/incubator-mxnet/issues/16498#issuecomment-542454688
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): Bug, CI




[GitHub] [incubator-mxnet] ChaiBapchya opened a new pull request #16497: Large Vector tests for DGL Ops Part 2

2019-10-15 Thread GitBox
ChaiBapchya opened a new pull request #16497: Large Vector tests for DGL Ops 
Part 2
URL: https://github.com/apache/incubator-mxnet/pull/16497
 
 
   ## Description ##
   Add hyperbolic, logical, sign, and regression tests for large vectors.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [x] Changes are complete (i.e. I finished coding on this PR)
   - [x] All changes have test coverage:
   - [x] Code is well-documented: 
   - [x] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Add tests to test_large_array.py
   - [x] Fix issue with test_concat introduced in PR #15960 




[GitHub] [incubator-mxnet] anandj91 commented on issue #15124: [MXNET-1294] Priority-based parameter propagation for improved data parallel training throughput

2019-10-15 Thread GitBox
anandj91 commented on issue #15124: [MXNET-1294] Priority-based parameter 
propagation for improved data parallel training throughput
URL: https://github.com/apache/incubator-mxnet/pull/15124#issuecomment-542440290
 
 
   I'm facing some design-level challenges in properly implementing Priority-based 
update (P3) on top of the PushPull API. MXNet does simple load balancing before 
pushing or pulling key-values by splitting NDArrays equally across the parameter 
servers. P3 requires a round-robin style parameter distribution, which means 
slicing a large NDArray into thousands of smaller ones: much more granular than 
the current default distribution strategy, with each PS getting more than one 
slice.
   
   With the way mxnet and ps-lite are designed right now, ps-lite assumes a single 
ZPush/ZPull/ZPushPull belongs to a single layer/NDArray. It also assumes that 
one slice only belongs to one PS. These assumptions need to be broken to 
implement P3. What I have done right now is add a round-robin (RR) distribution 
strategy alongside the default one, with a boolean flag to switch between the 
two. When the user chooses RR, KVStore considers each slice a separate 
key-value pair; otherwise it falls back to the default mode.
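
   For illustration, a minimal sketch of the round-robin slicing described above 
(the function name, slice size, and plain-NumPy types are hypothetical, not 
taken from KVStore or ps-lite):
   ```python
   import numpy as np

   def slice_round_robin(param, num_servers, slice_size):
       """Cut a flat parameter into small slices and assign them to
       parameter servers round-robin, so each PS holds several slices
       of every layer instead of one contiguous block."""
       flat = param.reshape(-1)
       assignment = {s: [] for s in range(num_servers)}
       for i, start in enumerate(range(0, flat.size, slice_size)):
           assignment[i % num_servers].append(flat[start:start + slice_size])
       return assignment

   # e.g. a 10k-element layer cut into 1k-element slices over 4 servers:
   parts = slice_round_robin(np.zeros(10000, dtype=np.float32), 4, 1000)
   print({s: len(v) for s, v in parts.items()})  # {0: 3, 1: 3, 2: 2, 3: 2}
   ```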




[incubator-mxnet] branch master updated (b1932c0 -> 06438ab)

2019-10-15 Thread ptrendx
This is an automated email from the ASF dual-hosted git repository.

ptrendx pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from b1932c0  Move MRCNNMaskTarget op to contrib (#16486)
 add 06438ab  Mxnet allclose (#14443)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/test_utils.py| 188 ++--
 src/operator/contrib/allclose_op-inl.h| 160 +++
 src/operator/contrib/allclose_op.cc   |  86 
 src/operator/contrib/allclose_op.cu   |  58 +++
 tests/python-pytest/onnx/mxnet_export_test.py |   2 +-
 tests/python/gpu/test_gluon_gpu.py| 136 --
 tests/python/gpu/test_gluon_model_zoo_gpu.py  |  14 +-
 tests/python/gpu/test_operator_gpu.py |  59 ++-
 tests/python/mkl/test_mkldnn.py   |  12 +-
 tests/python/unittest/test_gluon_contrib.py   |  21 +-
 tests/python/unittest/test_loss.py|  31 +-
 tests/python/unittest/test_ndarray.py |  27 +-
 tests/python/unittest/test_operator.py| 663 +++---
 tests/python/unittest/test_random.py  |  16 +-
 tests/python/unittest/test_sparse_operator.py |   4 +-
 tests/python/unittest/test_subgraph.py|   7 -
 16 files changed, 938 insertions(+), 546 deletions(-)
 create mode 100644 src/operator/contrib/allclose_op-inl.h
 create mode 100644 src/operator/contrib/allclose_op.cc
 create mode 100644 src/operator/contrib/allclose_op.cu




[GitHub] [incubator-mxnet] ptrendx merged pull request #14443: Mxnet allclose

2019-10-15 Thread GitBox
ptrendx merged pull request #14443: Mxnet allclose
URL: https://github.com/apache/incubator-mxnet/pull/14443
 
 
   




[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16477: added more tests to verify support for large vector

2019-10-15 Thread GitBox
ChaiBapchya commented on issue #16477: added more tests to verify support for 
large vector
URL: https://github.com/apache/incubator-mxnet/pull/16477#issuecomment-542438790
 
 
   `randint` has been flaky.
   Tracked here: https://github.com/apache/incubator-mxnet/issues/16172




[GitHub] [incubator-mxnet] reminisce commented on issue #16450: Add test pipeline for USE_TVM_OP=OFF on Unix

2019-10-15 Thread GitBox
reminisce commented on issue #16450: Add test pipeline for USE_TVM_OP=OFF on 
Unix
URL: https://github.com/apache/incubator-mxnet/pull/16450#issuecomment-542438460
 
 
   @mseth10 I have made changes as you suggested. Could you review again? 
Thanks.




[GitHub] [incubator-mxnet] sojiadeshina opened a new pull request #16496: fix missing docs due to git add issues

2019-10-15 Thread GitBox
sojiadeshina opened a new pull request #16496: fix missing docs due to git add 
issues
URL: https://github.com/apache/incubator-mxnet/pull/16496
 
 
   ## Description ##
   Some files were missing from PR #16392 due to a gitignore issue. They are 
now included here. 
   
   This should allow the pages for gluon.data to be rendered properly.
   
   




[GitHub] [incubator-mxnet] aaronmarkham opened a new issue #16495: docs for gluon.data.* are missing

2019-10-15 Thread GitBox
aaronmarkham opened a new issue #16495: docs for gluon.data.* are missing
URL: https://github.com/apache/incubator-mxnet/issues/16495
 
 
   Probably related to git filters when adding the files.
   
https://stackoverflow.com/questions/8006393/force-add-despite-the-gitignore-file
   
   
   
   




[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16495: docs for gluon.data.* are missing

2019-10-15 Thread GitBox
mxnet-label-bot commented on issue #16495: docs for gluon.data.* are missing
URL: 
https://github.com/apache/incubator-mxnet/issues/16495#issuecomment-542433508
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): Gluon, Doc




[GitHub] [incubator-mxnet] stu1130 commented on a change in pull request #16464: Image crop gpu support

2019-10-15 Thread GitBox
stu1130 commented on a change in pull request #16464: Image crop gpu support
URL: https://github.com/apache/incubator-mxnet/pull/16464#discussion_r335204117
 
 

 ##########
 File path: tests/python/gpu/test_gluon_transforms.py
 ##########
 @@ -28,76 +28,19 @@
 curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
 sys.path.insert(0, os.path.join(curr_path, '../unittest'))
 from common import assertRaises, setup_module, with_seed, teardown
-
+from test_gluon_data_vision import test_to_tensor, test_normalize, test_crop_resize
 
 set_default_context(mx.gpu(0))
 
 @with_seed()
-def test_normalize():
-    # 3D Input
-    data_in_3d = nd.random.uniform(0, 1, (3, 300, 300))
-    out_nd_3d = transforms.Normalize(mean=(0, 1, 2), std=(3, 2, 1))(data_in_3d)
-    data_expected_3d = data_in_3d.asnumpy()
-    data_expected_3d[:][:][0] = data_expected_3d[:][:][0] / 3.0
-    data_expected_3d[:][:][1] = (data_expected_3d[:][:][1] - 1.0) / 2.0
-    data_expected_3d[:][:][2] = data_expected_3d[:][:][2] - 2.0
-    assert_almost_equal(data_expected_3d, out_nd_3d.asnumpy())
-
-    # 4D Input
-    data_in_4d = nd.random.uniform(0, 1, (2, 3, 300, 300))
-    out_nd_4d = transforms.Normalize(mean=(0, 1, 2), std=(3, 2, 1))(data_in_4d)
-    data_expected_4d = data_in_4d.asnumpy()
-    data_expected_4d[0][:][:][0] = data_expected_4d[0][:][:][0] / 3.0
-    data_expected_4d[0][:][:][1] = (data_expected_4d[0][:][:][1] - 1.0) / 2.0
-    data_expected_4d[0][:][:][2] = data_expected_4d[0][:][:][2] - 2.0
-    data_expected_4d[1][:][:][0] = data_expected_4d[1][:][:][0] / 3.0
-    data_expected_4d[1][:][:][1] = (data_expected_4d[1][:][:][1] - 1.0) / 2.0
-    data_expected_4d[1][:][:][2] = data_expected_4d[1][:][:][2] - 2.0
-    assert_almost_equal(data_expected_4d, out_nd_4d.asnumpy())
-
-    # Default normalize values i.e., mean=0, std=1
-    data_in_3d_def = nd.random.uniform(0, 1, (3, 300, 300))
-    out_nd_3d_def = transforms.Normalize()(data_in_3d_def)
-    data_expected_3d_def = data_in_3d_def.asnumpy()
-    assert_almost_equal(data_expected_3d_def, out_nd_3d_def.asnumpy())
+def test_normalize_gpu():
+    test_normalize()
 
-    # Invalid Input - Neither 3D or 4D input
-    invalid_data_in = nd.random.uniform(0, 1, (5, 5, 3, 300, 300))
-    normalize_transformer = transforms.Normalize(mean=(0, 1, 2), std=(3, 2, 1))
-    assertRaises(MXNetError, normalize_transformer, invalid_data_in)
-
-    # Invalid Input - Channel neither 1 or 3
-    invalid_data_in = nd.random.uniform(0, 1, (5, 4, 300, 300))
-    normalize_transformer = transforms.Normalize(mean=(0, 1, 2), std=(3, 2, 1))
-    assertRaises(MXNetError, normalize_transformer, invalid_data_in)
 
 @with_seed()
-def test_to_tensor():
-    # 3D Input
-    data_in = np.random.uniform(0, 255, (300, 300, 3)).astype(dtype=np.uint8)
-    out_nd = transforms.ToTensor()(nd.array(data_in, dtype='uint8'))
-    assert_almost_equal(out_nd.asnumpy(), np.transpose(
-        data_in.astype(dtype=np.float32) / 255.0, (2, 0, 1)))
-
-    # 4D Input
-    data_in = np.random.uniform(0, 255, (5, 300, 300, 3)).astype(dtype=np.uint8)
-    out_nd = transforms.ToTensor()(nd.array(data_in, dtype='uint8'))
-    assert_almost_equal(out_nd.asnumpy(), np.transpose(
-        data_in.astype(dtype=np.float32) / 255.0, (0, 3, 1, 2)))
+def test_to_tensor_gpu():
+    test_to_tensor()
 
-    # Invalid Input
-    invalid_data_in = nd.random.uniform(0, 255, (5, 5, 300, 300, 3)).astype(dtype=np.uint8)
-    transformer = transforms.ToTensor()
-    assertRaises(MXNetError, transformer, invalid_data_in)
-
-    # Bounds (0->0, 255->1)
-    data_in = np.zeros((10, 20, 3)).astype(dtype=np.uint8)
-    out_nd = transforms.ToTensor()(nd.array(data_in, dtype='uint8'))
-    assert same(out_nd.asnumpy(), np.transpose(np.zeros(data_in.shape, dtype=np.float32), (2, 0, 1)))
-
-    data_in = np.full((10, 20, 3), 255).astype(dtype=np.uint8)
-    out_nd = transforms.ToTensor()(nd.array(data_in, dtype='uint8'))
-    assert same(out_nd.asnumpy(), np.transpose(np.ones(data_in.shape, dtype=np.float32), (2, 0, 1)))
 
 @with_seed()
 def test_resize():
 
 Review comment:
   done




[GitHub] [incubator-mxnet] roywei commented on a change in pull request #16464: Image crop gpu support

2019-10-15 Thread GitBox
roywei commented on a change in pull request #16464: Image crop gpu support
URL: https://github.com/apache/incubator-mxnet/pull/16464#discussion_r335201026
 
 

 ##########
 File path: tests/python/gpu/test_gluon_transforms.py
 ##########
 @@ -28,76 +28,19 @@
 curr_path = os.path.dirname(os.path.abspath(os.path.expanduser(__file__)))
 sys.path.insert(0, os.path.join(curr_path, '../unittest'))
 from common import assertRaises, setup_module, with_seed, teardown
-
+from test_gluon_data_vision import test_to_tensor, test_normalize, test_crop_resize
 
 set_default_context(mx.gpu(0))
 
 @with_seed()
-def test_normalize():
-    # 3D Input
-    data_in_3d = nd.random.uniform(0, 1, (3, 300, 300))
-    out_nd_3d = transforms.Normalize(mean=(0, 1, 2), std=(3, 2, 1))(data_in_3d)
-    data_expected_3d = data_in_3d.asnumpy()
-    data_expected_3d[:][:][0] = data_expected_3d[:][:][0] / 3.0
-    data_expected_3d[:][:][1] = (data_expected_3d[:][:][1] - 1.0) / 2.0
-    data_expected_3d[:][:][2] = data_expected_3d[:][:][2] - 2.0
-    assert_almost_equal(data_expected_3d, out_nd_3d.asnumpy())
-
-    # 4D Input
-    data_in_4d = nd.random.uniform(0, 1, (2, 3, 300, 300))
-    out_nd_4d = transforms.Normalize(mean=(0, 1, 2), std=(3, 2, 1))(data_in_4d)
-    data_expected_4d = data_in_4d.asnumpy()
-    data_expected_4d[0][:][:][0] = data_expected_4d[0][:][:][0] / 3.0
-    data_expected_4d[0][:][:][1] = (data_expected_4d[0][:][:][1] - 1.0) / 2.0
-    data_expected_4d[0][:][:][2] = data_expected_4d[0][:][:][2] - 2.0
-    data_expected_4d[1][:][:][0] = data_expected_4d[1][:][:][0] / 3.0
-    data_expected_4d[1][:][:][1] = (data_expected_4d[1][:][:][1] - 1.0) / 2.0
-    data_expected_4d[1][:][:][2] = data_expected_4d[1][:][:][2] - 2.0
-    assert_almost_equal(data_expected_4d, out_nd_4d.asnumpy())
-
-    # Default normalize values i.e., mean=0, std=1
-    data_in_3d_def = nd.random.uniform(0, 1, (3, 300, 300))
-    out_nd_3d_def = transforms.Normalize()(data_in_3d_def)
-    data_expected_3d_def = data_in_3d_def.asnumpy()
-    assert_almost_equal(data_expected_3d_def, out_nd_3d_def.asnumpy())
+def test_normalize_gpu():
+    test_normalize()
 
-    # Invalid Input - Neither 3D or 4D input
-    invalid_data_in = nd.random.uniform(0, 1, (5, 5, 3, 300, 300))
-    normalize_transformer = transforms.Normalize(mean=(0, 1, 2), std=(3, 2, 1))
-    assertRaises(MXNetError, normalize_transformer, invalid_data_in)
-
-    # Invalid Input - Channel neither 1 or 3
-    invalid_data_in = nd.random.uniform(0, 1, (5, 4, 300, 300))
-    normalize_transformer = transforms.Normalize(mean=(0, 1, 2), std=(3, 2, 1))
-    assertRaises(MXNetError, normalize_transformer, invalid_data_in)
 
 @with_seed()
-def test_to_tensor():
-    # 3D Input
-    data_in = np.random.uniform(0, 255, (300, 300, 3)).astype(dtype=np.uint8)
-    out_nd = transforms.ToTensor()(nd.array(data_in, dtype='uint8'))
-    assert_almost_equal(out_nd.asnumpy(), np.transpose(
-        data_in.astype(dtype=np.float32) / 255.0, (2, 0, 1)))
-
-    # 4D Input
-    data_in = np.random.uniform(0, 255, (5, 300, 300, 3)).astype(dtype=np.uint8)
-    out_nd = transforms.ToTensor()(nd.array(data_in, dtype='uint8'))
-    assert_almost_equal(out_nd.asnumpy(), np.transpose(
-        data_in.astype(dtype=np.float32) / 255.0, (0, 3, 1, 2)))
+def test_to_tensor_gpu():
+    test_to_tensor()
 
-    # Invalid Input
-    invalid_data_in = nd.random.uniform(0, 255, (5, 5, 300, 300, 3)).astype(dtype=np.uint8)
-    transformer = transforms.ToTensor()
-    assertRaises(MXNetError, transformer, invalid_data_in)
-
-    # Bounds (0->0, 255->1)
-    data_in = np.zeros((10, 20, 3)).astype(dtype=np.uint8)
-    out_nd = transforms.ToTensor()(nd.array(data_in, dtype='uint8'))
-    assert same(out_nd.asnumpy(), np.transpose(np.zeros(data_in.shape, dtype=np.float32), (2, 0, 1)))
-
-    data_in = np.full((10, 20, 3), 255).astype(dtype=np.uint8)
-    out_nd = transforms.ToTensor()(nd.array(data_in, dtype='uint8'))
-    assert same(out_nd.asnumpy(), np.transpose(np.ones(data_in.shape, dtype=np.float32), (2, 0, 1)))
 
 @with_seed()
 def test_resize():
 
 Review comment:
   Can you rename this to `test_resize_gpu` for consistency, so people know it 
differs from `test_resize()`?
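
   A minimal sketch of the suggested rename, mirroring the `test_normalize_gpu` 
and `test_to_tensor_gpu` wrappers above (the body is elided here; only the 
name changes):
   ```python
   @with_seed()
   def test_resize_gpu():
       # body unchanged from the existing test_resize(); only the name differs
       ...
   ```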




[GitHub] [incubator-mxnet] mseth10 commented on a change in pull request #16450: Add test pipeline for USE_TVM_OP=OFF on Unix

2019-10-15 Thread GitBox
mseth10 commented on a change in pull request #16450: Add test pipeline for 
USE_TVM_OP=OFF on Unix
URL: https://github.com/apache/incubator-mxnet/pull/16450#discussion_r335190974
 
 

 ##########
 File path: ci/jenkins/Jenkins_steps.groovy
 ##########
 @@ -25,19 +25,23 @@ utils = load('ci/Jenkinsfile_utils.groovy')
 // mxnet libraries
 mx_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, lib/libtvmop.so, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a'
 mx_lib_cython = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, lib/libtvmop.so, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
+mx_lib_cython_no_tvm_op = 'lib/libmxnet.so, lib/libmxnet.a, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
 
 Review comment:
   We do not need this library set; see this comment:
   https://github.com/apache/incubator-mxnet/pull/16450/files#r335190763




[GitHub] [incubator-mxnet] mseth10 commented on a change in pull request #16450: Add test pipeline for USE_TVM_OP=OFF on Unix

2019-10-15 Thread GitBox
mseth10 commented on a change in pull request #16450: Add test pipeline for 
USE_TVM_OP=OFF on Unix
URL: https://github.com/apache/incubator-mxnet/pull/16450#discussion_r335188815
 
 

 ##########
 File path: ci/jenkins/Jenkins_steps.groovy
 ##########
 @@ -25,19 +25,23 @@ utils = load('ci/Jenkinsfile_utils.groovy')
 // mxnet libraries
 mx_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, lib/libtvmop.so, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a'
 mx_lib_cython = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, lib/libtvmop.so, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
+mx_lib_cython_no_tvm_op = 'lib/libmxnet.so, lib/libmxnet.a, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
 
 // Python wheels
 mx_pip = 'build/*.whl'
 
 // mxnet cmake libraries, in cmake builds we do not produce a libnvvm static library by default.
 mx_cmake_lib = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so'
+mx_cmake_lib_no_tvm_op = 'build/libmxnet.so, build/libmxnet.a, build/libsample_lib.so, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so'
 mx_cmake_lib_cython = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
+mx_cmake_lib_cython_no_tvm_op = 'build/libmxnet.so, build/libmxnet.a, build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
 
 Review comment:
   We do not need this library set; see this comment:
   https://github.com/apache/incubator-mxnet/pull/16450/files#r335187907




[GitHub] [incubator-mxnet] mseth10 commented on a change in pull request #16450: Add test pipeline for USE_TVM_OP=OFF on Unix

2019-10-15 Thread GitBox
mseth10 commented on a change in pull request #16450: Add test pipeline for 
USE_TVM_OP=OFF on Unix
URL: https://github.com/apache/incubator-mxnet/pull/16450#discussion_r335190974
 
 

 ##########
 File path: ci/jenkins/Jenkins_steps.groovy
 ##########
 @@ -25,19 +25,23 @@ utils = load('ci/Jenkinsfile_utils.groovy')
 // mxnet libraries
 mx_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, lib/libtvmop.so, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a'
 mx_lib_cython = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, lib/libtvmop.so, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
+mx_lib_cython_no_tvm_op = 'lib/libmxnet.so, lib/libmxnet.a, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
 
 Review comment:
   We do not need this library set; see this comment:
   https://github.com/apache/incubator-mxnet/pull/16450/files#r335190763




[GitHub] [incubator-mxnet] mseth10 commented on a change in pull request #16450: Add test pipeline for USE_TVM_OP=OFF on Unix

2019-10-15 Thread GitBox
mseth10 commented on a change in pull request #16450: Add test pipeline for 
USE_TVM_OP=OFF on Unix
URL: https://github.com/apache/incubator-mxnet/pull/16450#discussion_r335190763
 
 

 ##########
 File path: ci/jenkins/Jenkins_steps.groovy
 ##########
 @@ -756,6 +802,22 @@ def test_unix_python3_gpu() {
   }]
 }
 
+def test_unix_python3_gpu_no_tvm_op() {
+  return ['Python3: GPU TVM_OP OFF': {
+    node(NODE_LINUX_GPU) {
+      ws('workspace/ut-python3-gpu-no-tvm-op') {
+        try {
+          utils.unpack_and_init('gpu_no_tvm_op', mx_lib_cython_no_tvm_op)
 
 Review comment:
   The build corresponding to this test packs the library set 
`mx_lib_cpp_examples_no_tvm_op`:
   
https://github.com/apache/incubator-mxnet/pull/16450/files#diff-bb61a49bf10098c4c42879f1632fdb40R273
   
   We should use the same library set while unpacking; a new library set is not 
required.
   Suggested action: replace `mx_lib_cython_no_tvm_op` with 
`mx_lib_cpp_examples_no_tvm_op`.




[incubator-mxnet] branch numpy_pr_merge updated (63500c5 -> b1932c0)

2019-10-15 Thread haoj
This is an automated email from the ASF dual-hosted git repository.

haoj pushed a change to branch numpy_pr_merge
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


omit 63500c5  add numpy op logspace (#15825)
omit bf8bd40  Numpy compatible vsplit; minor changes to split (#15983)
omit 80982ec  numpy eye op (#16132)
omit 974327e  [Numpy] Numpy compatible dstack (#15871)
 add 812e504  fix autodoc for spurrious toggles (#16452)
 add 7ce  Fix dtype bug (#16467)
 add 9ab428e  [Doc] Update the download page with 1.5.1 release (#16442)
 add 6e0b1a5  [Numpy] Numpy compatible dstack (#15871)
 add ceebcaf  numpy eye op (#16132)
 add 8222979  Numpy compatible vsplit; minor changes to split (#15983)
 add 8562adc  add numpy op logspace (#15825)
 add 9681197  add numpy op bitwise_xor, hsplit, moveaxis, rot90 (#16257)
 add f9359c3  Fix flakey pylint CI failures (#16462)
 add 67e1e68  Aggregated zero grad (#16446)
 add b1932c0  Move MRCNNMaskTarget op to contrib (#16486)

This update added new revisions after undoing existing revisions.
That is to say, some revisions that were in the old version of the
branch are not in the new version.  This situation occurs
when a user --force pushes a change and generates a repository
containing something like this:

 * -- * -- B -- O -- O -- O   (63500c5)
\
 N -- N -- N   refs/heads/numpy_pr_merge (b1932c0)

You should already have received notification emails for all of the O
revisions, and so the following emails describe only the N revisions
from the common base, B.

Any revisions marked "omit" are not gone; other references still
refer to them.  Any revisions marked "discard" are gone forever.

No new revisions were added by this update.

Summary of changes:
 ci/other/pylintrc  |   7 +-
 docs/python_docs/_static/autodoc.js|  34 +-
 .../src/_includes/get_started/get_started.html |   6 +-
 .../src/_includes/get_started/pip_snippet.md   |   2 +-
 docs/static_site/src/pages/get_started/download.md |   1 +
 python/mxnet/_numpy_op_doc.py  |  44 +++
 python/mxnet/gluon/parameter.py|  21 +-
 python/mxnet/ndarray/numpy/_op.py  | 164 -
 python/mxnet/numpy/multiarray.py   | 161 -
 python/mxnet/numpy_op_signature.py |   5 +-
 python/mxnet/symbol/numpy/_symbol.py   | 151 -
 ...{mrcnn_target-inl.h => mrcnn_mask_target-inl.h} |  44 +--
 .../{mrcnn_target.cu => mrcnn_mask_target.cu}  |  56 ++--
 src/operator/contrib/reset_arrays-inl.h|  92 ++
 src/operator/contrib/reset_arrays.cc   |  74 +
 .../contrib/{multi_lars.cu => reset_arrays.cu} |  18 +-
 src/operator/mshadow_op.h  |   2 +
 src/operator/numpy/np_elemwise_broadcast_op.cu |   4 +
 src/operator/numpy/np_matrix_op-inl.h  | 367 +
 src/operator/numpy/np_matrix_op.cc | 168 ++
 src/operator/numpy/np_matrix_op.cu |  12 +
 src/operator/operator_tune.cc  |   1 +
 src/operator/tensor/matrix_op-inl.h| 115 ---
 tests/python/unittest/test_contrib_operator.py |  58 
 tests/python/unittest/test_gluon.py|  66 +++-
 tests/python/unittest/test_numpy_ndarray.py|  23 ++
 tests/python/unittest/test_numpy_op.py | 148 +
 tests/python/unittest/test_operator.py |  57 
 28 files changed, 1705 insertions(+), 196 deletions(-)
 rename src/operator/contrib/{mrcnn_target-inl.h => mrcnn_mask_target-inl.h} 
(70%)
 rename src/operator/contrib/{mrcnn_target.cu => mrcnn_mask_target.cu} (83%)
 create mode 100644 src/operator/contrib/reset_arrays-inl.h
 create mode 100644 src/operator/contrib/reset_arrays.cc
 copy src/operator/contrib/{multi_lars.cu => reset_arrays.cu} (69%)



[GitHub] [incubator-mxnet] mseth10 commented on a change in pull request #16450: Add test pipeline for USE_TVM_OP=OFF on Unix

2019-10-15 Thread GitBox
mseth10 commented on a change in pull request #16450: Add test pipeline for 
USE_TVM_OP=OFF on Unix
URL: https://github.com/apache/incubator-mxnet/pull/16450#discussion_r335188815
 
 

 ##
 File path: ci/jenkins/Jenkins_steps.groovy
 ##
 @@ -25,19 +25,23 @@ utils = load('ci/Jenkinsfile_utils.groovy')
 // mxnet libraries
 mx_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, 
lib/libtvmop.so, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 
3rdparty/tvm/nnvm/lib/libnnvm.a'
 mx_lib_cython = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, 
lib/libtvmop.so, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 
3rdparty/tvm/nnvm/lib/libnnvm.a, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
+mx_lib_cython_no_tvm_op = 'lib/libmxnet.so, lib/libmxnet.a, libsample_lib.so, 
3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, 
python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
 
 // Python wheels
 mx_pip = 'build/*.whl'
 
 // mxnet cmake libraries, in cmake builds we do not produce a libnvvm static 
library by default.
 mx_cmake_lib = 'build/libmxnet.so, build/libmxnet.a, 
build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, 
build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, 
build/3rdparty/openmp/runtime/src/libomp.so'
+mx_cmake_lib_no_tvm_op = 'build/libmxnet.so, build/libmxnet.a, 
build/libsample_lib.so, build/3rdparty/dmlc-core/libdmlc.a, 
build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so'
 mx_cmake_lib_cython = 'build/libmxnet.so, build/libmxnet.a, 
build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, 
build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, 
build/3rdparty/openmp/runtime/src/libomp.so, python/mxnet/_cy2/*.so, 
python/mxnet/_cy3/*.so'
+mx_cmake_lib_cython_no_tvm_op = 'build/libmxnet.so, build/libmxnet.a, 
build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, 
build/3rdparty/openmp/runtime/src/libomp.so, python/mxnet/_cy2/*.so, 
python/mxnet/_cy3/*.so'
 
 Review comment:
   We do not need this library set, as a result of this:
   https://github.com/apache/incubator-mxnet/pull/16450/files#r335187907


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] mseth10 commented on a change in pull request #16450: Add test pipeline for USE_TVM_OP=OFF on Unix

2019-10-15 Thread GitBox
mseth10 commented on a change in pull request #16450: Add test pipeline for 
USE_TVM_OP=OFF on Unix
URL: https://github.com/apache/incubator-mxnet/pull/16450#discussion_r335187907
 
 

 ##
 File path: ci/jenkins/Jenkins_steps.groovy
 ##
 @@ -273,6 +305,20 @@ def compile_unix_cmake_gpu() {
 }]
 }
 
+def compile_unix_cmake_gpu_no_tvm_op() {
+return ['GPU: CMake TVM_OP OFF': {
+  node(NODE_LINUX_CPU) {
+ws('workspace/build-cmake-gpu-no-tvm-op') {
+  timeout(time: max_time, unit: 'MINUTES') {
+utils.init_git()
+utils.docker_run('ubuntu_gpu_cu101', 
'build_ubuntu_gpu_cmake_no_tvm_op', false)
+utils.pack_lib('cmake_gpu_no_tvm_op', 
mx_cmake_lib_cython_no_tvm_op)
 
 Review comment:
   We don't need to pack libraries here, as there is no test stage that uses it.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ChaiBapchya commented on issue #16416: Dgl ops 2

2019-10-15 Thread GitBox
ChaiBapchya commented on issue #16416: Dgl ops 2
URL: https://github.com/apache/incubator-mxnet/pull/16416#issuecomment-542411988
 
 
   The batch_dot operator hasn't been tested with large arrays before, at least not in the test_large_array.py file.
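   
   For reference, a minimal sketch of what such a check could look like (the shapes here are small stand-ins; a real nightly test would use tensors whose element count crosses the 2**32 boundary):
   
   ```python
   import mxnet as mx
   
   # batch_dot contracts (B, M, K) x (B, K, N) -> (B, M, N)
   a = mx.nd.ones((2, 3, 4))
   b = mx.nd.ones((2, 4, 5))
   c = mx.nd.batch_dot(a, b)
   
   assert c.shape == (2, 3, 5)
   # each output entry is a dot product of four ones
   assert c[-1][-1][-1] == 4
   ```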


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 commented on issue #16493: Tests of interoperability of numpy dispatch

2019-10-15 Thread GitBox
haojin2 commented on issue #16493: Tests of interoperability of numpy dispatch
URL: https://github.com/apache/incubator-mxnet/pull/16493#issuecomment-542411471
 
 
   @xiezhq-hermann please fix the CI errors.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience edited a comment on issue #16416: Dgl ops 2

2019-10-15 Thread GitBox
sxjscience edited a comment on issue #16416: Dgl ops 2
URL: https://github.com/apache/incubator-mxnet/pull/16416#issuecomment-542409048
 
 
   ~~I think we've tested for batch_dot before~~
   
   Okay, this is specific to large array.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience commented on issue #16416: Dgl ops 2

2019-10-15 Thread GitBox
sxjscience commented on issue #16416: Dgl ops 2
URL: https://github.com/apache/incubator-mxnet/pull/16416#issuecomment-542409048
 
 
   I think we've tested for batch_dot before


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] haojin2 opened a new pull request #16494: Proper handling of "allow_np_array" attribute in optimizer

2019-10-15 Thread GitBox
haojin2 opened a new pull request #16494: Proper handling of "allow_np_array" 
attribute in optimizer
URL: https://github.com/apache/incubator-mxnet/pull/16494
 
 
   ## Description ##
   As title.
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [x] Fix of the logic
   - [x] Unit test
   
   ## Comments ##
   @eric-haibin-lin @reminisce 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] DickJC123 commented on issue #16401: CI showing random pylint failures

2019-10-15 Thread GitBox
DickJC123 commented on issue #16401: CI showing random pylint failures
URL: 
https://github.com/apache/incubator-mxnet/issues/16401#issuecomment-542400603
 
 
   Issue fixed by now-merged 
https://github.com/apache/incubator-mxnet/pull/16462.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] DickJC123 closed issue #16401: CI showing random pylint failures

2019-10-15 Thread GitBox
DickJC123 closed issue #16401: CI showing random pylint failures
URL: https://github.com/apache/incubator-mxnet/issues/16401
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] xiezhq-hermann opened a new pull request #16493: Tests of interoperability of numpy dispatch

2019-10-15 Thread GitBox
xiezhq-hermann opened a new pull request #16493: Tests of interoperability of 
numpy dispatch
URL: https://github.com/apache/incubator-mxnet/pull/16493
 
 
   ## Description ##
   Tests of interoperability of NumPy dispatch; this PR covers these ops:
   - concatenate
   - copy
   - expand_dims
   - expm1
   - norm
   - svd
   - split
   - squeeze
   - std
   - swapaxes
   - tensordot
   - tile
   - trace
   - transpose
   - tril
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, the source of the dataset, expected performance on the test set, and a reference to the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   Known common issues:
   - data type coverage
   - some modules (like masked arrays, np.matrix, etc.) are not supported
   - Python native scalar and list objects are not dispatchable
   
   @reminisce Thanks for your effort in reviewing the test code; please let me know of any problems.
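   
   To illustrate the mechanism under test, a minimal sketch (assuming a NumPy build with `__array_function__` dispatch enabled and MXNet's numpy module):
   
   ```python
   import numpy as onp            # official NumPy
   from mxnet import np, npx
   
   npx.set_np()                   # enable NumPy-compatible semantics in MXNet
   
   a = np.arange(6).reshape(2, 3)
   # onp.transpose dispatches to MXNet via __array_function__, so the result
   # stays an mxnet.numpy.ndarray instead of a classic numpy.ndarray
   b = onp.transpose(a)
   
   assert isinstance(b, np.ndarray)
   assert b.shape == (3, 2)
   ```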


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #16416: Dgl ops 2

2019-10-15 Thread GitBox
ChaiBapchya commented on a change in pull request #16416: Dgl ops 2
URL: https://github.com/apache/incubator-mxnet/pull/16416#discussion_r335171324
 
 

 ##
 File path: tests/nightly/test_large_array.py
 ##
 @@ -1212,6 +1214,80 @@ def test_full():
 assert a[-1][-1] == 3
 
 
+def test_hyperbolic():
+def test_arccosh(a):
 
 Review comment:
   We are doing the last-element check to save time. But this is being done for each test (>50), so it doesn't make sense to add it for this specific test alone. In that case, should I add it at the beginning of the file?
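   
   A sketch of what such a shared helper could look like (the name and placement here are my assumption, not part of the PR):
   
   ```python
   import math
   import mxnet as mx
   
   def check_last_element(out, ref_fn, last_input_value):
       # comparing only the final element keeps large-tensor tests cheap
       # while still exercising indexing at the far end of the array
       assert out[-1][-1] == ref_fn(last_input_value)
   
   # usage with a small stand-in array
   a = mx.nd.full((2, 2), 4.0)
   check_last_element(mx.nd.sqrt(a), math.sqrt, 4.0)
   ```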


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ChaiBapchya commented on a change in pull request #16416: Dgl ops 2

2019-10-15 Thread GitBox
ChaiBapchya commented on a change in pull request #16416: Dgl ops 2
URL: https://github.com/apache/incubator-mxnet/pull/16416#discussion_r335171324
 
 

 ##
 File path: tests/nightly/test_large_array.py
 ##
 @@ -1212,6 +1214,80 @@ def test_full():
 assert a[-1][-1] == 3
 
 
+def test_hyperbolic():
+def test_arccosh(a):
 
 Review comment:
   We are doing the last-element check to save time. But this is being done for each test (>50), so it doesn't make sense to add it for this specific test alone. In that case, should I add it to the top of the function?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] stu1130 closed issue #16491: CI failed due to DockerHub outage

2019-10-15 Thread GitBox
stu1130 closed issue #16491: CI failed due to DockerHub outage
URL: https://github.com/apache/incubator-mxnet/issues/16491
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on a change in pull request #16465: [Gluon] [Fix] [WIP] Fix HybridBlock when hybridize is not called

2019-10-15 Thread GitBox
leezu commented on a change in pull request #16465: [Gluon] [Fix] [WIP] Fix 
HybridBlock when hybridize is not called
URL: https://github.com/apache/incubator-mxnet/pull/16465#discussion_r335161765
 
 

 ##
 File path: python/mxnet/gluon/block.py
 ##
 @@ -1054,34 +1098,16 @@ def register_op_hook(self, callback, 
monitor_all=False):
 def forward(self, x, *args):
 """Defines the forward computation. Arguments can be either
 :py:class:`NDArray` or :py:class:`Symbol`."""
-flatten_args = _flatten([x] + list(args), 'inputs')[0]
-is_ndarray = None
-ctx = None
-exist_sym_nd = False
-for ele in flatten_args:
-if isinstance(ele, NDArray):
-if is_ndarray is False:
-raise ValueError('In HybridBlock, we do not support mixed 
NDArrays and Symbols'
- ' types for the input.\n'
- 'Received types are: {}.'
- .format([type(ele) for ele in 
flatten_args]))
-is_ndarray = True
-exist_sym_nd = True
-ctx = ele.context
 
 Review comment:
   We should get rid of choosing one array and using its context as the default context. For parameters, users should get the array via `self.weight.data(ctx)`. For the time being I suggest not breaking this behaviour, to avoid unintended consequences.
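   
   A minimal sketch of the access pattern described above (the block and shapes are illustrative only):
   
   ```python
   import mxnet as mx
   from mxnet.gluon import nn
   
   net = nn.Dense(4, in_units=3)
   net.initialize(ctx=mx.cpu(0))
   x = mx.nd.ones((2, 3))
   
   # fetch the parameter copy on an explicit context instead of inferring
   # a default context from one of the forward inputs
   w = net.weight.data(mx.cpu(0))
   b = net.bias.data(mx.cpu(0))
   y = mx.nd.FullyConnected(x, w, b, num_hidden=4)
   assert y.shape == (2, 4)
   ```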


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] access2rohit commented on issue #16409: adding tests to verify large tensor support for more operators

2019-10-15 Thread GitBox
access2rohit commented on issue #16409: adding tests to verify large tensor 
support for more operators
URL: https://github.com/apache/incubator-mxnet/pull/16409#issuecomment-542389259
 
 
   test_large_array.test_gluon_embedding ... ok
   test_large_array.test_ndarray_zeros ... ok
   test_large_array.test_ndarray_ones ... ok
   test_large_array.test_ndarray_convert ... ok
   test_large_array.test_ndarray_random_uniform ... [DEBUG] Setting test 
np/mx/python random seeds, use MXNET_TEST_SEED=583459780 to reproduce.
   ok
   test_large_array.test_ndarray_random_randint ... [DEBUG] Setting test 
np/mx/python random seeds, use MXNET_TEST_SEED=1561187177 to reproduce.
   ok
   test_large_array.test_ndarray_random_exponential ... [DEBUG] Setting test 
np/mx/python random seeds, use MXNET_TEST_SEED=488723450 to reproduce.
   ok
   test_large_array.test_ndarray_random_gamma ... [DEBUG] Setting test 
np/mx/python random seeds, use MXNET_TEST_SEED=1246538300 to reproduce.
   ok
   test_large_array.test_ndarray_random_multinomial ... [DEBUG] Setting test 
np/mx/python random seeds, use MXNET_TEST_SEED=279077978 to reproduce.
   
   
   ok
   test_large_array.test_ndarray_random_generalized_negative_binomial ... 
[DEBUG] Setting test np/mx/python random seeds, use MXNET_TEST_SEED=582809580 
to reproduce.
   ok
   test_large_array.test_ndarray_random_negative_binomial ... [DEBUG] Setting 
test np/mx/python random seeds, use MXNET_TEST_SEED=1682432459 to reproduce.
   ok
   test_large_array.test_ndarray_random_normal ... [DEBUG] Setting test 
np/mx/python random seeds, use MXNET_TEST_SEED=721628740 to reproduce.
   ok
   test_large_array.test_ndarray_random_poisson ... [DEBUG] Setting test 
np/mx/python random seeds, use MXNET_TEST_SEED=1614267541 to reproduce.
   ok
   test_large_array.test_ndarray_random_randn ... [DEBUG] Setting test 
np/mx/python random seeds, use MXNET_TEST_SEED=1931064888 to reproduce.
   ok
   test_large_array.test_ndarray_random_shuffle ... [DEBUG] Setting test 
np/mx/python random seeds, use MXNET_TEST_SEED=887032379 to reproduce.
   ok
   test_large_array.test_ndarray_empty ... ok
   test_large_array.test_elementwise ... ok
   test_large_array.test_reduce ... ok
   test_large_array.test_dot ... ok
   test_large_array.test_FullyConnected ... ok
   test_large_array.test_broadcast ... ok
   test_large_array.test_clip ... ok
   test_large_array.test_split ... ok
   test_large_array.test_argmin ... ok
   test_large_array.test_tile ... ok
   test_large_array.test_take ... ok
   test_large_array.test_slice ... ok
   test_large_array.test_slice_assign ... ok
   test_large_array.test_expand_dims ... ok
   Helper function that cleans up memory by releasing it from memory pool ... ok
   test_large_array.test_squeeze ... ok
   test_large_array.test_broadcast_div ... ok
   test_large_array.test_Dense ... ok
   test_large_array.test_where ... ok
   test_large_array.test_pick ... ok
   test_large_array.test_depthtospace ... ok
   test_large_array.test_spacetodepth ... ok
   test_large_array.test_diag ... [DEBUG] Setting test np/mx/python random 
seeds, use MXNET_TEST_SEED=1737100945 to reproduce.
   ok
   test_large_array.test_ravel_multi_index ... [DEBUG] Setting test 
np/mx/python random seeds, use MXNET_TEST_SEED=1577038190 to reproduce.
   ok
   test_large_array.test_unravel_index ... [DEBUG] Setting test np/mx/python 
random seeds, use MXNET_TEST_SEED=754939684 to reproduce.
   ok
   test_large_array.test_transpose ... ok
   test_large_array.test_swapaxes ... ok
   test_large_array.test_flip ... ok
   
   
   
   test_large_array.test_softmax ... ok
   test_large_array.test_argsort ... ok
   test_large_array.test_sort ... ok
   test_large_array.test_topk ... ok
   test_large_array.test_exponent_logarithm_operators ... ok
   test_large_array.test_power_operators ... ok
   
   
   test_large_array.test_sequence_mask ... ok
   test_large_array.test_sequence_reverse ... ok
   test_large_array.test_sequence_last ... ok
   test_large_array.test_softmax_cross_entropy ... ok
   test_large_array.test_index_copy ... ok
   test_large_array.testSoftmaxOutput ... ok
   test_large_array.test_leaky_relu ... ok
   test_large_array.test_pooling ... ok
   test_large_array.test_layer_norm ... ok
   
   test_large_array.test_dropout ... ok
   test_large_array.test_activation ... ok
   test_large_array.test_batchnorm ... ok
   test_large_array.test_add ... ok
   test_large_array.test_sub ... ok
   test_large_array.test_rsub ... ok
   test_large_array.test_neg ... ok
   test_large_array.test_mul ... ok
   test_large_array.test_div ... ok
   test_large_array.test_rdiv ... ok
   test_large_array.test_mod ... ok
   test_large_array.test_rmod ... ok
   test_large_array.test_imod ... ok
   test_large_array.test_pow ... ok
   test_large_array.test_rpow ... ok
   test_large_array.test_shape ... ok
   test_large_array.test_size ... ok
   test_large_array.test_copy ... ok
   test_large_array.test_copy_to ... ok
   

[GitHub] [incubator-mxnet] anirudh2290 commented on issue #16477: added more tests to verify support for large vector

2019-10-15 Thread GitBox
anirudh2290 commented on issue #16477: added more tests to verify support for 
large vector
URL: https://github.com/apache/incubator-mxnet/pull/16477#issuecomment-542380621
 
 
   randint is not failing now?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (67e1e68 -> b1932c0)

2019-10-15 Thread zhreshold
This is an automated email from the ASF dual-hosted git repository.

zhreshold pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from 67e1e68  Aggregated zero grad (#16446)
 add b1932c0  Move MRCNNMaskTarget op to contrib (#16486)

No new revisions were added by this update.

Summary of changes:
 ...{mrcnn_target-inl.h => mrcnn_mask_target-inl.h} | 44 
 .../{mrcnn_target.cu => mrcnn_mask_target.cu}  | 56 ++---
 tests/python/unittest/test_contrib_operator.py | 58 ++
 tests/python/unittest/test_operator.py | 57 -
 4 files changed, 108 insertions(+), 107 deletions(-)
 rename src/operator/contrib/{mrcnn_target-inl.h => mrcnn_mask_target-inl.h} 
(70%)
 rename src/operator/contrib/{mrcnn_target.cu => mrcnn_mask_target.cu} (83%)



[GitHub] [incubator-mxnet] zhreshold merged pull request #16486: Move mrcnn_mask_target op to contrib

2019-10-15 Thread GitBox
zhreshold merged pull request #16486: Move mrcnn_mask_target op to contrib
URL: https://github.com/apache/incubator-mxnet/pull/16486
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] zhreshold commented on issue #16486: Move mrcnn_mask_target op to contrib

2019-10-15 Thread GitBox
zhreshold commented on issue #16486: Move mrcnn_mask_target op to contrib
URL: https://github.com/apache/incubator-mxnet/pull/16486#issuecomment-542357930
 
 
   thanks for the fix!


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] mseth10 commented on a change in pull request #16450: Add test pipeline for USE_TVM_OP=OFF on Unix

2019-10-15 Thread GitBox
mseth10 commented on a change in pull request #16450: Add test pipeline for 
USE_TVM_OP=OFF on Unix
URL: https://github.com/apache/incubator-mxnet/pull/16450#discussion_r335124096
 
 

 ##
 File path: ci/jenkins/Jenkins_steps.groovy
 ##
 @@ -25,19 +25,23 @@ utils = load('ci/Jenkinsfile_utils.groovy')
 // mxnet libraries
 mx_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, 
lib/libtvmop.so, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 
3rdparty/tvm/nnvm/lib/libnnvm.a'
 mx_lib_cython = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, 
lib/libtvmop.so, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 
3rdparty/tvm/nnvm/lib/libnnvm.a, python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
+mx_lib_cython_no_tvm_op = 'lib/libmxnet.so, lib/libmxnet.a, libsample_lib.so, 
3rdparty/dmlc-core/libdmlc.a, 3rdparty/tvm/nnvm/lib/libnnvm.a, 
python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
 
 // Python wheels
 mx_pip = 'build/*.whl'
 
 // mxnet cmake libraries, in cmake builds we do not produce a libnvvm static 
library by default.
 mx_cmake_lib = 'build/libmxnet.so, build/libmxnet.a, 
build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, 
build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, 
build/3rdparty/openmp/runtime/src/libomp.so'
+mx_cmake_lib_no_tvm_op = 'build/libmxnet.so, build/libmxnet.a, 
libsample_lib.so, build/3rdparty/dmlc-core/libdmlc.a, 
build/tests/mxnet_unit_tests, build/3rdparty/openmp/runtime/src/libomp.so'
 mx_cmake_lib_cython = 'build/libmxnet.so, build/libmxnet.a, 
build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, 
build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, 
build/3rdparty/openmp/runtime/src/libomp.so, python/mxnet/_cy2/*.so, 
python/mxnet/_cy3/*.so'
+mx_cmake_lib_cython_no_tvm_op = 'build/libmxnet.so, build/libmxnet.a, 
build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, 
build/3rdparty/openmp/runtime/src/libomp.so, python/mxnet/_cy2/*.so, 
python/mxnet/_cy3/*.so'
 // mxnet cmake libraries, in cmake builds we do not produce a libnvvm static 
library by default.
 mx_cmake_lib_debug = 'build/libmxnet.so, build/libmxnet.a, 
build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, 
build/libsample_lib.so, build/3rdparty/dmlc-core/libdmlc.a, 
build/tests/mxnet_unit_tests'
 mx_cmake_mkldnn_lib = 'build/libmxnet.so, build/libmxnet.a, 
build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, 
build/3rdparty/dmlc-core/libdmlc.a, build/tests/mxnet_unit_tests, 
build/3rdparty/openmp/runtime/src/libomp.so, 
build/3rdparty/mkldnn/src/libmkldnn.so.0'
 mx_mkldnn_lib = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, 
lib/libtvmop.so, libsample_lib.so, lib/libiomp5.so, lib/libmkldnn.so.0, 
lib/libmklml_intel.so, 3rdparty/dmlc-core/libdmlc.a, 
3rdparty/tvm/nnvm/lib/libnnvm.a'
 mx_tensorrt_lib = 'build/libmxnet.so, build/3rdparty/tvm/libtvm_runtime.so, 
build/libtvmop.so, lib/libnvonnxparser_runtime.so.0, lib/libnvonnxparser.so.0, 
lib/libonnx_proto.so, lib/libonnx.so'
 mx_lib_cpp_examples = 'lib/libmxnet.so, lib/libmxnet.a, lib/libtvm_runtime.so, 
lib/libtvmop.so, libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 
3rdparty/tvm/nnvm/lib/libnnvm.a, 3rdparty/ps-lite/build/libps.a, 
deps/lib/libprotobuf-lite.a, deps/lib/libzmq.a, build/cpp-package/example/*, 
python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
+mx_lib_cpp_examples_no_tvm_op = 'lib/libmxnet.so, lib/libmxnet.a, 
libsample_lib.so, 3rdparty/dmlc-core/libdmlc.a, 
3rdparty/tvm/nnvm/lib/libnnvm.a, 3rdparty/ps-lite/build/libps.a, 
deps/lib/libprotobuf-lite.a, deps/lib/libzmq.a, build/cpp-package/example/*, 
python/mxnet/_cy2/*.so, python/mxnet/_cy3/*.so'
 mx_lib_cpp_examples_cpu = 'build/libmxnet.so, 
build/3rdparty/tvm/libtvm_runtime.so, build/libtvmop.so, 
build/cpp-package/example/*'
 
 Review comment:
   I took a look and let @reminisce know offline. `mx_cmake_lib_no_tvm_op` is 
the binary set corresponding to a cmake build and hence we need to put it in a 
`build/` directory.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16492: Broken Links - Python Tutorial | Getting Started

2019-10-15 Thread GitBox
mxnet-label-bot commented on issue #16492: Broken Links - Python Tutorial | 
Getting Started 
URL: 
https://github.com/apache/incubator-mxnet/issues/16492#issuecomment-542355915
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): Doc


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TEChopra1000 opened a new issue #16492: Broken Links - Python Tutorial | Getting Started

2019-10-15 Thread GitBox
TEChopra1000 opened a new issue #16492: Broken Links - Python Tutorial | 
Getting Started 
URL: https://github.com/apache/incubator-mxnet/issues/16492
 
 
   **Broken Links Found in the [Crash 
Course](https://mxnet.apache.org/api/python/docs/tutorials/getting-started/crash-course/index.html#)**
   
   * previous tutorial | [found here](https://mxnet.incubator.apache.org/api/python/docs/tutorials/getting-started/crash-course/5-predict.html)
   * previous tutorial | [found here](https://mxnet.incubator.apache.org/api/python/docs/tutorials/getting-started/crash-course/6-use_gpus.html)
   
   **Broken links found in [Moving to MXNet from Other 
Frameworks](https://mxnet.apache.org/api/python/docs/tutorials/getting-started/to-mxnet/index.html)**
   
   * Caffe to MXNet box
   * Caffe to MXNet does not show up in the left menu under Moving to MXNet from Other Frameworks
   
   **Broken links found in [PyTorch vs Apache 
MXNet](https://mxnet.apache.org/api/python/docs/tutorials/getting-started/to-mxnet/pytorch.html)**
   
   A majority of the links on this page are broken, so I will not list them all here.
   
   **Broken links found in [Gluon: from experiment to 
deployment](https://mxnet.apache.org/api/python/docs/tutorials/getting-started/gluon_from_experiment_to_deployment.html)**
   
   A majority of the links on this page are broken, so I will not list them all here.
   
   **Broken Links found in [Logistic Regression 
Explained](https://mxnet.apache.org/api/python/docs/tutorials/getting-started/logistic_regression_explained.html)**
   
   All of the links in this paragraph are broken:
   
   To work with data, Apache MXNet provides [Dataset](https://mxnet.apache.org/api/python/gluon/data.html#mxnet.gluon.data.Dataset) and [DataLoader](https://mxnet.apache.org/api/python/gluon/data.html#mxnet.gluon.data.DataLoader) classes. The former provides indexed access to the data; the latter shuffles and batchifies it. To learn more about working with data in Gluon, please refer to the [Gluon Datasets and Dataloaders](https://mxnet.apache.org/tutorials/gluon/datasets.html) tutorial.
   
   
   I think that the link on the word Xavier further down the page is meant to point [here](https://mxnet.apache.org/api/python/docs/api/initializer/index.html#mxnet.initializer.Xavier) or [here](https://mxnet.apache.org/api/python/docs/api/initializer/index.html).
   
   **Incorrect links found in [Logistic Regression 
Explained](https://mxnet.apache.org/api/python/docs/tutorials/getting-started/logistic_regression_explained.html)**
   
   * HybridSequential
   * Sigmoid
   * LogisticRegressionOutput


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet-site] branch asf-site updated: Bump the publish timestamp.

2019-10-15 Thread aaronmarkham
This is an automated email from the ASF dual-hosted git repository.

aaronmarkham pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet-site.git


The following commit(s) were added to refs/heads/asf-site by this push:
 new 0fdc502  Bump the publish timestamp.
0fdc502 is described below

commit 0fdc5023907506b5ab814d17f4923d5a4b1ba574
Author: mxnet-ci 
AuthorDate: Tue Oct 15 18:43:10 2019 +

Bump the publish timestamp.
---
 date.txt | 1 +
 1 file changed, 1 insertion(+)

diff --git a/date.txt b/date.txt
new file mode 100644
index 000..48c17e8
--- /dev/null
+++ b/date.txt
@@ -0,0 +1 @@
+Tue Oct 15 18:43:10 UTC 2019



[GitHub] [incubator-mxnet] aaronmarkham closed issue #16322: [website] rendering issue on implementing ops faq

2019-10-15 Thread GitBox
aaronmarkham closed issue #16322: [website] rendering issue on implementing ops 
faq
URL: https://github.com/apache/incubator-mxnet/issues/16322
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TEChopra1000 closed issue #16482: Broken Links: Handwritten Digit Recognition (Python)

2019-10-15 Thread GitBox
TEChopra1000 closed issue #16482: Broken Links: Handwritten Digit Recognition 
(Python)
URL: https://github.com/apache/incubator-mxnet/issues/16482
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TEChopra1000 commented on issue #16482: Broken Links: Handwritten Digit Recognition (Python)

2019-10-15 Thread GitBox
TEChopra1000 commented on issue #16482: Broken Links: Handwritten Digit 
Recognition (Python)
URL: 
https://github.com/apache/incubator-mxnet/issues/16482#issuecomment-542339846
 
 
   I'm going to put all broken links from the Python tutorials into one ticket, so I'll close this one.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ThomasDelteil commented on issue #16490: Correct Google Analytics Tracker

2019-10-15 Thread GitBox
ThomasDelteil commented on issue #16490: Correct Google Analytics Tracker
URL: https://github.com/apache/incubator-mxnet/pull/16490#issuecomment-542339412
 
 
   @aaronmarkham the Python docs header is generated through the theme, which loads the analytics file that @szha changed.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] stu1130 commented on issue #16491: CI failed due to DockerHub outage

2019-10-15 Thread GitBox
stu1130 commented on issue #16491: CI failed due to DockerHub outage
URL: 
https://github.com/apache/incubator-mxnet/issues/16491#issuecomment-542338273
 
 
   looks like it's working now, will close it if my PR passes the CI


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] stu1130 edited a comment on issue #16491: CI failed due to DockerHub outage

2019-10-15 Thread GitBox
stu1130 edited a comment on issue #16491: CI failed due to DockerHub outage
URL: 
https://github.com/apache/incubator-mxnet/issues/16491#issuecomment-542338273
 
 
   looks like it's working now, will close it if my PR passes the CI


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] roywei commented on issue #16491: CI failed due to DockerHub outage

2019-10-15 Thread GitBox
roywei commented on issue #16491: CI failed due to DockerHub outage
URL: 
https://github.com/apache/incubator-mxnet/issues/16491#issuecomment-542336802
 
 
   tracked here on docker: https://github.com/docker/hub-feedback/issues/1897


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16465: [Gluon] [Fix] [WIP] Fix HybridBlock when hybridize is not called

2019-10-15 Thread GitBox
sxjscience commented on a change in pull request #16465: [Gluon] [Fix] [WIP] 
Fix HybridBlock when hybridize is not called
URL: https://github.com/apache/incubator-mxnet/pull/16465#discussion_r335100583
 
 

 ##
 File path: python/mxnet/gluon/block.py
 ##
 @@ -1054,34 +1098,16 @@ def register_op_hook(self, callback, monitor_all=False):
     def forward(self, x, *args):
         """Defines the forward computation. Arguments can be either
         :py:class:`NDArray` or :py:class:`Symbol`."""
-        flatten_args = _flatten([x] + list(args), 'inputs')[0]
-        is_ndarray = None
-        ctx = None
-        exist_sym_nd = False
-        for ele in flatten_args:
-            if isinstance(ele, NDArray):
-                if is_ndarray is False:
-                    raise ValueError('In HybridBlock, we do not support mixed NDArrays and Symbols'
-                                     ' types for the input.\n'
-                                     'Received types are: {}.'
-                                     .format([type(ele) for ele in flatten_args]))
-                is_ndarray = True
-                exist_sym_nd = True
-                ctx = ele.context
 
 Review comment:
   @leezu I agree that the backward-compatibility concern is valid. Let me first make it backward compatible. However, this does not fix the issue with the cpu, cpu_pinned, cpu_shared combination.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] mxnet-label-bot commented on issue #16491: CI failed due to DockerHub outage

2019-10-15 Thread GitBox
mxnet-label-bot commented on issue #16491: CI failed due to DockerHub outage
URL: 
https://github.com/apache/incubator-mxnet/issues/16491#issuecomment-542336621
 
 
   Hey, this is the MXNet Label Bot. 
Thank you for submitting the issue! I will try and suggest some labels so 
that the appropriate MXNet community members can help resolve it. 
Here are my recommended label(s): Test, CI, Build


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] stu1130 opened a new issue #16491: CI failed due to DockerHub outage

2019-10-15 Thread GitBox
stu1130 opened a new issue #16491: CI failed due to DockerHub outage
URL: https://github.com/apache/incubator-mxnet/issues/16491
 
 
   Some examples of CI failures:
   
http://jenkins.mxnet-ci.amazon-ml.com/blue/organizations/jenkins/mxnet-validation%2Fcentos-cpu/detail/PR-16490/1/pipeline
   
   The latest status of DockerHub:
   https://status.docker.com/pages/533c6539221ae15e3f31


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #16408: Add MXNet Ops for fast multihead attention

2019-10-15 Thread GitBox
ptrendx commented on a change in pull request #16408: Add MXNet Ops for fast 
multihead attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#discussion_r335095248
 
 

 ##
 File path: src/operator/contrib/transformer.cu
 ##
 @@ -22,12 +22,568 @@
  * \file transformer.cu
  * \brief GPU implementation of the operators used in Transformer
  */
+
+#include 
+#include 
+#include 
+#include 
+
 #include 
 #include "./transformer-inl.h"
+#include "../../common/cuda_utils.h"
 
 namespace mxnet {
 namespace op {
 
+// gemm_switch_fp32accum and the functions called are almost fully copied from:
+// MLPerf v0.6 submission repository from NVIDIA by https://github.com/kevinstephano
 
 Review comment:
   This is no longer the case, right?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #16408: Add MXNet Ops for fast multihead attention

2019-10-15 Thread GitBox
ptrendx commented on a change in pull request #16408: Add MXNet Ops for fast 
multihead attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#discussion_r335094420
 
 

 ##
 File path: src/operator/contrib/transformer.cc
 ##
 @@ -29,6 +29,163 @@
 namespace mxnet {
 namespace op {
 
+DMLC_REGISTER_PARAMETER(InterleavedMatMulParam);
+
+static bool InterleavedMatMulSelfAttQKShape(const NodeAttrs& attrs,
+                                            mxnet::ShapeVector* in_shape,
+                                            mxnet::ShapeVector* out_shape) {
+  const auto& params = nnvm::get<InterleavedMatMulParam>(attrs.parsed);
+  CHECK_EQ(in_shape->size(), 1);
+  auto qkv_shape = in_shape->at(0);
+  CHECK_EQ(qkv_shape.ndim(), 3);
+  out_shape->resize(1);
+  SHAPE_ASSIGN_CHECK(*out_shape, 0,
+    mxnet::TShape({params.heads * qkv_shape[1], qkv_shape[0], qkv_shape[0]}));
+  return true;
+}
+
+static bool InterleavedMatMulSelfAttValAttShape(const NodeAttrs& attrs,
+                                                mxnet::ShapeVector* in_shape,
+                                                mxnet::ShapeVector* out_shape) {
+  CHECK_EQ(in_shape->size(), 2);
+  auto qkv_shape = in_shape->at(0);
+  auto att_shape = in_shape->at(1);
+  CHECK_EQ(qkv_shape.ndim(), 3);
+  CHECK_EQ(att_shape.ndim(), 3);
+  CHECK_EQ(qkv_shape[0], att_shape[1]);
+  CHECK_EQ(qkv_shape[0], att_shape[2]);
+  CHECK_EQ(qkv_shape[2] % 3, 0);
 
 Review comment:
   Could you add meaningful error messages for the cases where the shape does not match your expectations?
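   For illustration, a minimal Python sketch (a hypothetical helper, mirroring the checks
above) of the shape contract being enforced, with the kind of descriptive messages being
requested; the assumed layout is qkv = (seq_len, batch, 3 * heads * head_dim):

   # Hypothetical Python mirror of InterleavedMatMulSelfAttQKShape, for illustration only.
   def self_att_qk_output_shape(qkv_shape, heads):
       if len(qkv_shape) != 3:
           raise ValueError('queries_keys_values must be 3D, got %dD' % len(qkv_shape))
       if qkv_shape[2] % 3 != 0:
           raise ValueError('last dim of queries_keys_values must be divisible by 3 '
                            '(interleaved Q, K, V), got %d' % qkv_shape[2])
       seq_len, batch, _ = qkv_shape
       # one (seq_len, seq_len) attention-score matrix per head and batch item
       return (heads * batch, seq_len, seq_len)

   print(self_att_qk_output_shape((128, 8, 3 * 16 * 64), heads=16))  # (128, 128, 128)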


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] sxjscience commented on a change in pull request #16465: [Gluon] [Fix] [WIP] Fix HybridBlock when hybridize is not called

2019-10-15 Thread GitBox
sxjscience commented on a change in pull request #16465: [Gluon] [Fix] [WIP] 
Fix HybridBlock when hybridize is not called
URL: https://github.com/apache/incubator-mxnet/pull/16465#discussion_r335094454
 
 

 ##
 File path: python/mxnet/gluon/block.py
 ##
 @@ -1054,34 +1098,16 @@ def register_op_hook(self, callback, monitor_all=False):
     def forward(self, x, *args):
         """Defines the forward computation. Arguments can be either
         :py:class:`NDArray` or :py:class:`Symbol`."""
-        flatten_args = _flatten([x] + list(args), 'inputs')[0]
-        is_ndarray = None
-        ctx = None
-        exist_sym_nd = False
-        for ele in flatten_args:
-            if isinstance(ele, NDArray):
-                if is_ndarray is False:
-                    raise ValueError('In HybridBlock, we do not support mixed NDArrays and Symbols'
-                                     ' types for the input.\n'
-                                     'Received types are: {}.'
-                                     .format([type(ele) for ele in flatten_args]))
-                is_ndarray = True
-                exist_sym_nd = True
-                ctx = ele.context
 
 Review comment:
   @leezu It's also possible that cpu_pinned was previously picked as the default context and that, after the change, the correct cpu context is picked as the default. My point is that we probably need to give special treatment to the `cpu, cpu_pinned, cpu_shared` contexts. What's your opinion?


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #16408: Add MXNet Ops for fast multihead attention

2019-10-15 Thread GitBox
ptrendx commented on a change in pull request #16408: Add MXNet Ops for fast 
multihead attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#discussion_r335094585
 
 

 ##
 File path: src/operator/contrib/transformer.cc
 ##
 @@ -29,6 +29,163 @@
 namespace mxnet {
 namespace op {
 
+DMLC_REGISTER_PARAMETER(InterleavedMatMulParam);
+
+static bool InterleavedMatMulSelfAttQKShape(const NodeAttrs& attrs,
+                                            mxnet::ShapeVector* in_shape,
+                                            mxnet::ShapeVector* out_shape) {
+  const auto& params = nnvm::get<InterleavedMatMulParam>(attrs.parsed);
+  CHECK_EQ(in_shape->size(), 1);
+  auto qkv_shape = in_shape->at(0);
+  CHECK_EQ(qkv_shape.ndim(), 3);
+  out_shape->resize(1);
+  SHAPE_ASSIGN_CHECK(*out_shape, 0,
+    mxnet::TShape({params.heads * qkv_shape[1], qkv_shape[0], qkv_shape[0]}));
+  return true;
+}
+
+static bool InterleavedMatMulSelfAttValAttShape(const NodeAttrs& attrs,
+                                                mxnet::ShapeVector* in_shape,
+                                                mxnet::ShapeVector* out_shape) {
+  CHECK_EQ(in_shape->size(), 2);
+  auto qkv_shape = in_shape->at(0);
+  auto att_shape = in_shape->at(1);
+  CHECK_EQ(qkv_shape.ndim(), 3);
+  CHECK_EQ(att_shape.ndim(), 3);
+  CHECK_EQ(qkv_shape[0], att_shape[1]);
+  CHECK_EQ(qkv_shape[0], att_shape[2]);
+  CHECK_EQ(qkv_shape[2] % 3, 0);
+  SHAPE_ASSIGN_CHECK(*out_shape, 0,
+    mxnet::TShape({qkv_shape[0], qkv_shape[1], qkv_shape[2] / 3}));
+  return true;
+}
+
+static bool InterleavedMatMulEncDecQKShape(const NodeAttrs& attrs,
+                                           mxnet::ShapeVector* in_shape,
+                                           mxnet::ShapeVector* out_shape) {
+  const auto& params = nnvm::get<InterleavedMatMulParam>(attrs.parsed);
+  CHECK_EQ(in_shape->size(), 2);
+  auto q_shape = in_shape->at(0);
+  auto kv_shape = in_shape->at(1);
+  CHECK_EQ(q_shape.ndim(), 3);
+  CHECK_EQ(kv_shape.ndim(), 3);
+  CHECK_EQ(q_shape[2] * 2, kv_shape[2]);
+  CHECK_EQ(q_shape[1], kv_shape[1]);
+  SHAPE_ASSIGN_CHECK(*out_shape, 0,
+    mxnet::TShape({q_shape[1] * params.heads, q_shape[0], kv_shape[0]}));
+  return true;
+}
+
+static bool InterleavedMatMulEncDecValAttShape(const NodeAttrs& attrs,
+                                               mxnet::ShapeVector* in_shape,
+                                               mxnet::ShapeVector* out_shape) {
+  const auto& params = nnvm::get<InterleavedMatMulParam>(attrs.parsed);
+  CHECK_EQ(in_shape->size(), 2);
+  auto kv_shape = in_shape->at(0);
+  auto att_shape = in_shape->at(1);
+  CHECK_EQ(kv_shape[0], att_shape[2]);
+  CHECK_EQ(kv_shape[1] * params.heads, att_shape[0]);
 
 Review comment:
   And here


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #16408: Add MXNet Ops for fast multihead attention

2019-10-15 Thread GitBox
ptrendx commented on a change in pull request #16408: Add MXNet Ops for fast 
multihead attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#discussion_r335094502
 
 

 ##
 File path: src/operator/contrib/transformer.cc
 ##
 @@ -29,6 +29,163 @@
 namespace mxnet {
 namespace op {
 
+DMLC_REGISTER_PARAMETER(InterleavedMatMulParam);
+
+static bool InterleavedMatMulSelfAttQKShape(const NodeAttrs& attrs,
+                                            mxnet::ShapeVector* in_shape,
+                                            mxnet::ShapeVector* out_shape) {
+  const auto& params = nnvm::get<InterleavedMatMulParam>(attrs.parsed);
+  CHECK_EQ(in_shape->size(), 1);
+  auto qkv_shape = in_shape->at(0);
+  CHECK_EQ(qkv_shape.ndim(), 3);
+  out_shape->resize(1);
+  SHAPE_ASSIGN_CHECK(*out_shape, 0,
+    mxnet::TShape({params.heads * qkv_shape[1], qkv_shape[0], qkv_shape[0]}));
+  return true;
+}
+
+static bool InterleavedMatMulSelfAttValAttShape(const NodeAttrs& attrs,
+                                                mxnet::ShapeVector* in_shape,
+                                                mxnet::ShapeVector* out_shape) {
+  CHECK_EQ(in_shape->size(), 2);
+  auto qkv_shape = in_shape->at(0);
+  auto att_shape = in_shape->at(1);
+  CHECK_EQ(qkv_shape.ndim(), 3);
+  CHECK_EQ(att_shape.ndim(), 3);
+  CHECK_EQ(qkv_shape[0], att_shape[1]);
+  CHECK_EQ(qkv_shape[0], att_shape[2]);
+  CHECK_EQ(qkv_shape[2] % 3, 0);
+  SHAPE_ASSIGN_CHECK(*out_shape, 0,
+    mxnet::TShape({qkv_shape[0], qkv_shape[1], qkv_shape[2] / 3}));
+  return true;
+}
+
+static bool InterleavedMatMulEncDecQKShape(const NodeAttrs& attrs,
+                                           mxnet::ShapeVector* in_shape,
+                                           mxnet::ShapeVector* out_shape) {
+  const auto& params = nnvm::get<InterleavedMatMulParam>(attrs.parsed);
+  CHECK_EQ(in_shape->size(), 2);
+  auto q_shape = in_shape->at(0);
+  auto kv_shape = in_shape->at(1);
+  CHECK_EQ(q_shape.ndim(), 3);
+  CHECK_EQ(kv_shape.ndim(), 3);
+  CHECK_EQ(q_shape[2] * 2, kv_shape[2]);
+  CHECK_EQ(q_shape[1], kv_shape[1]);
 
 Review comment:
   Same here


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] ptrendx commented on a change in pull request #16408: Add MXNet Ops for fast multihead attention

2019-10-15 Thread GitBox
ptrendx commented on a change in pull request #16408: Add MXNet Ops for fast 
multihead attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#discussion_r335093967
 
 

 ##
 File path: Makefile
 ##
 @@ -106,7 +106,7 @@ ifeq ($(DEBUG), 1)
 else
CFLAGS += -O3 -DNDEBUG=1
 endif
-CFLAGS += -I$(TPARTYDIR)/mshadow/ -I$(TPARTYDIR)/dmlc-core/include -fPIC -I$(NNVM_PATH)/include -I$(DLPACK_PATH)/include -I$(TPARTYDIR)/tvm/include -Iinclude $(MSHADOW_CFLAGS)
+CFLAGS += -I$(TPARTYDIR)/mshadow/ -I$(TPARTYDIR)/dmlc-core/include -fPIC -I$(NNVM_PATH)/include -I$(DLPACK_PATH)/include -I$(TPARTYDIR)/tvm/include -Iinclude $(MSHADOW_CFLAGS) -I$(TPARTYDIR)/cutlass/
 
 Review comment:
   Remove cutlass from here.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] leezu commented on a change in pull request #16465: [Gluon] [Fix] [WIP] Fix HybridBlock when hybridize is not called

2019-10-15 Thread GitBox
leezu commented on a change in pull request #16465: [Gluon] [Fix] [WIP] Fix 
HybridBlock when hybridize is not called
URL: https://github.com/apache/incubator-mxnet/pull/16465#discussion_r335090536
 
 

 ##
 File path: python/mxnet/gluon/block.py
 ##
 @@ -1054,34 +1098,16 @@ def register_op_hook(self, callback, monitor_all=False):
     def forward(self, x, *args):
         """Defines the forward computation. Arguments can be either
         :py:class:`NDArray` or :py:class:`Symbol`."""
-        flatten_args = _flatten([x] + list(args), 'inputs')[0]
-        is_ndarray = None
-        ctx = None
-        exist_sym_nd = False
-        for ele in flatten_args:
-            if isinstance(ele, NDArray):
-                if is_ndarray is False:
-                    raise ValueError('In HybridBlock, we do not support mixed NDArrays and Symbols'
-                                     ' types for the input.\n'
-                                     'Received types are: {}.'
-                                     .format([type(ele) for ele in flatten_args]))
-                is_ndarray = True
-                exist_sym_nd = True
-                ctx = ele.context
 
 Review comment:
   Since the previous implementation hasn't enforced that all contexts are equal, we shouldn't start picking a different array to determine the context. As you stated above, it's valid to use a mix of `cpu, cpu_pinned, cpu_shared` contexts.
   For example, after your change, `cpu_pinned` or `cpu_shared` may be picked as the default context instead of `cpu` if the user passed a `cpu_pinned` or `cpu_shared` array as the last argument. The extra overhead could cause a performance regression (all parameters would be made available under the default context).
   No need to risk this given there is no advantage, right?
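   To make the concern concrete, a minimal runnable sketch (illustrative only, not the
PR's code) of how inferring the context from the last NDArray argument would pick
`cpu_pinned` over plain `cpu`:

   import mxnet as mx

   a = mx.nd.zeros((2, 2), ctx=mx.cpu())
   b = mx.nd.zeros((2, 2), ctx=mx.cpu_pinned())

   ctx = None
   for ele in [a, b]:
       ctx = ele.context  # each NDArray overwrites ctx, so the last argument wins
   print(ctx)             # cpu_pinned(0), even though plain cpu(0) was likely intended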


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aaronmarkham commented on issue #16490: Correct Google Analytics Tracker

2019-10-15 Thread GitBox
aaronmarkham commented on issue #16490: Correct Google Analytics Tracker
URL: https://github.com/apache/incubator-mxnet/pull/16490#issuecomment-542327294
 
 
   It is set up to use it here:
https://github.com/apache/incubator-mxnet/blame/master/docs/static_site/src/_includes/google-analytics.html
   
   I'm not sure why there are two analytics files; maybe this one should be deleted. I think the config is set up correctly because if you view the page source, the ID is `-1`.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] aaronmarkham commented on issue #16490: Correct Google Analytics Tracker

2019-10-15 Thread GitBox
aaronmarkham commented on issue #16490: Correct Google Analytics Tracker
URL: https://github.com/apache/incubator-mxnet/pull/16490#issuecomment-542323233
 
 
   Hi @szha - the config for the ID should be here:
   
https://github.com/apache/incubator-mxnet/blob/master/docs/static_site/src/_config_prod.yml
   It is there, so any mention of the tracker ID should use that variable rather than being statically defined.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch szha-patch-1 created (now da0bf19)

2019-10-15 Thread zhasheng
This is an automated email from the ASF dual-hosted git repository.

zhasheng pushed a change to branch szha-patch-1
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


  at da0bf19  Correct Google Analytics Tracker

No new revisions were added by this update.



[GitHub] [incubator-mxnet] szha opened a new pull request #16490: Correct Google Analytics Tracker

2019-10-15 Thread GitBox
szha opened a new pull request #16490: Correct Google Analytics Tracker
URL: https://github.com/apache/incubator-mxnet/pull/16490
 
 
   ## Description ##
   (Brief description on what this PR is about)
   
   ## Checklist ##
   ### Essentials ###
   Please feel free to remove inapplicable items for your PR.
   - [ ] The PR title starts with [MXNET-$JIRA_ID], where $JIRA_ID refers to 
the relevant [JIRA issue](https://issues.apache.org/jira/projects/MXNET/issues) 
created (except PRs with tiny changes)
   - [ ] Changes are complete (i.e. I finished coding on this PR)
   - [ ] All changes have test coverage:
   - Unit tests are added for small changes to verify correctness (e.g. adding 
a new operator)
   - Nightly tests are added for complicated/long-running ones (e.g. changing 
distributed kvstore)
   - Build tests will be added for build configuration changes (e.g. adding a 
new build option with NCCL)
   - [ ] Code is well-documented: 
   - For user-facing API changes, API doc string has been updated. 
   - For new C++ functions in header files, their functionalities and arguments 
are documented. 
   - For new examples, README.md is added to explain what the example does, 
the source of the dataset, expected performance on test set and reference to 
the original paper if applicable
   - Check the API doc at 
https://mxnet-ci-doc.s3-accelerate.dualstack.amazonaws.com/PR-$PR_ID/$BUILD_ID/index.html
   - [ ] To the best of my knowledge, examples are either not affected by this 
change, or have been fixed to be compatible with this change
   
   ### Changes ###
   - [ ] Feature1, tests, (and when applicable, API doc)
   - [ ] Feature2, tests, (and when applicable, API doc)
   
   ## Comments ##
   - If this change is a backward incompatible change, why must this change be 
made.
   - Interesting edge cases to note here
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TEChopra1000 commented on issue #16481: Broken / Incorrect Links - Gluon: from experiment to deployment (DOCS)

2019-10-15 Thread GitBox
TEChopra1000 commented on issue #16481: Broken / Incorrect Links - Gluon: from 
experiment to deployment (DOCS)
URL: 
https://github.com/apache/incubator-mxnet/issues/16481#issuecomment-542310643
 
 
   I'm going to move all broken links from the Python Tutorials into one 
ticket. Closing this ticket. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TEChopra1000 closed issue #16481: Broken / Incorrect Links - Gluon: from experiment to deployment (DOCS)

2019-10-15 Thread GitBox
TEChopra1000 closed issue #16481: Broken / Incorrect Links - Gluon: from 
experiment to deployment (DOCS)
URL: https://github.com/apache/incubator-mxnet/issues/16481
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TEChopra1000 closed issue #16480: Broken Links - Logistic Regression Explained (DOCS)

2019-10-15 Thread GitBox
TEChopra1000 closed issue #16480: Broken Links - Logistic Regression Explained 
(DOCS)
URL: https://github.com/apache/incubator-mxnet/issues/16480
 
 
   


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] TEChopra1000 commented on issue #16480: Broken Links - Logistic Regression Explained (DOCS)

2019-10-15 Thread GitBox
TEChopra1000 commented on issue #16480: Broken Links - Logistic Regression 
Explained (DOCS)
URL: 
https://github.com/apache/incubator-mxnet/issues/16480#issuecomment-542310176
 
 
   I'm going to move all of the broken links from the Python Tutorials section 
into one ticket. Closing this one. 


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] hetong007 commented on a change in pull request #16487: Fix learning rate scheduler being unexpectedly overwritten by optimizer's default value

2019-10-15 Thread GitBox
hetong007 commented on a change in pull request #16487: Fix learning rate 
scheduler being unexpectedly overwritten by optimizer's default value
URL: https://github.com/apache/incubator-mxnet/pull/16487#discussion_r335070091
 
 

 ##
 File path: python/mxnet/optimizer/optimizer.py
 ##
 @@ -63,8 +63,10 @@ class Optimizer(object):
     clip_gradient : float, optional, default None
         Clip the gradient by projecting onto the box ``[-clip_gradient, clip_gradient]``.
 
-    learning_rate : float, optional, default 0.01
-        The initial learning rate.
+    learning_rate : float, optional, default None
+        The initial learning rate. If None, the optimization will use the
+        learning rate from ``lr_scheduler``. If not None, it will overwrite
 
 Review comment:
   It is technically `None` by default, but effectively it is `0.01`. I'll include this in the parameter doc.
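   A short sketch of the resolution order being described (assumed semantics based on
this discussion, not necessarily the merged code):

   def effective_lr(learning_rate=None, lr_scheduler=None):
       if learning_rate is not None:
           return learning_rate           # an explicit value overwrites the scheduler's
       if lr_scheduler is not None:
           return lr_scheduler.base_lr    # defer to the scheduler's base learning rate
       return 0.01                        # effective default when neither is given

   assert effective_lr() == 0.01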


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[GitHub] [incubator-mxnet] Caenorst commented on a change in pull request #16408: Add MXNet Ops for fast multihead attention

2019-10-15 Thread GitBox
Caenorst commented on a change in pull request #16408: Add MXNet Ops for fast 
multihead attention
URL: https://github.com/apache/incubator-mxnet/pull/16408#discussion_r335057869
 
 

 ##
 File path: .gitmodules
 ##
 @@ -26,3 +26,6 @@
 [submodule "3rdparty/nvidia_cub"]
 	path = 3rdparty/nvidia_cub
 	url = https://github.com/NVlabs/cub.git
+[submodule "3rdparty/cutlass"]
 
 Review comment:
   Update: we decided to drop Cutlass for now; there are some cases in which cuBLAS actually works better, and it makes the PR simpler.


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services


[incubator-mxnet] branch master updated (f9359c3 -> 67e1e68)

2019-10-15 Thread ptrendx
This is an automated email from the ASF dual-hosted git repository.

ptrendx pushed a change to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-mxnet.git.


from f9359c3  Fix flakey pylint CI failures (#16462)
 add 67e1e68  Aggregated zero grad (#16446)

No new revisions were added by this update.

Summary of changes:
 python/mxnet/gluon/parameter.py| 21 -
 src/operator/contrib/reset_arrays-inl.h| 92 ++
 src/operator/contrib/reset_arrays.cc   | 74 +
 .../contrib/{multi_lars.cu => reset_arrays.cu} | 18 +++--
 tests/python/unittest/test_gluon.py| 66 +---
 5 files changed, 252 insertions(+), 19 deletions(-)
 create mode 100644 src/operator/contrib/reset_arrays-inl.h
 create mode 100644 src/operator/contrib/reset_arrays.cc
 copy src/operator/contrib/{multi_lars.cu => reset_arrays.cu} (69%)



[GitHub] [incubator-mxnet] ptrendx edited a comment on issue #15589: [Discussion] 1.6.0 Roadmap

2019-10-15 Thread GitBox
ptrendx edited a comment on issue #15589: [Discussion] 1.6.0 Roadmap
URL: 
https://github.com/apache/incubator-mxnet/issues/15589#issuecomment-526373840
 
 
   We have multiple improvements to BERT inference and training speed that we would like to be part of the 1.6 release:
- [x] Softmax optimizations (#15545 )
- [ ] Pointwise fusion for GPU (#15167 )
- [ ] Eliminate common expressions (#15657 )
- [x] Bias speed improvements (#16039 )
- [ ] Aggregated AdamW optimizer (#16398)
- [x] Aggregated zeroing of the gradients (#16446)
- [x] Aggregated sum of squares operator (also used in LARS, #16122)
- [x] Embedding gradient optimization (#16355)
- [ ] Faster multihead attention operator (#16408)


This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
us...@infra.apache.org


With regards,
Apache Git Services

