Re: Report of MXNet NumPy Project Status

2019-08-03 Thread Chaitanya Bapat
Thanks Jun for the summary. Apologies for the delayed response.

Having skimmed through a bunch of PRs revolving around "NumPy-compatibility
Infra" (#15581, #14758, #14924),

I had 3 questions.
1. It looks like NumPy-compatible APIs would make MXNet more "usable",
"easy-to-use", or "user-friendly", and a NumPy-flavored MXNet seems to be
one big bet in our roadmap. So my question is: would this be an addition or
a replacement? That is, would we discontinue mx.nd.zeros and use
mx.np.zeros instead?
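To make the addition-vs-replacement question concrete: code written against the NumPy API can target any namespace that implements it. The sketch below uses NumPy itself; under the proposal the same import could hypothetically become `from mxnet import np` (an assumption on my part, since the exact import path was still being decided).

```python
import numpy as np

# Code written against the NumPy API is portable across any namespace
# implementing it; only the import line would change under the proposal.
def make_mask(n):
    return np.zeros(n, dtype=bool)

mask = make_mask(4)
print(mask.shape, mask.dtype)  # (4,) bool
```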

2. Are we going to deprecate our mx.nd.* ops in 2.0 or upcoming releases?
The reason I'm asking: I have a pending PR for an mx.nd.cumsum op, but now
that Hao's #15581 has mx.np.cumsum in the pipeline, should I close my PR if
it's not going to be used in the future?
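For reference, these are the np.cumsum semantics that an mx.np.cumsum would presumably be expected to match (shown with NumPy itself, since the MXNet version was still in review):

```python
import numpy as np

# np.cumsum flattens by default and accepts an axis argument.
x = np.array([[1, 2, 3],
              [4, 5, 6]])
print(np.cumsum(x))          # flattened: [ 1  3  6 10 15 21]
print(np.cumsum(x, axis=0))  # along axis 0: [[1 2 3], [5 7 9]]
```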

3. I understand that making our operators NumPy-compatible is an urgent
need that will be greatly appreciated by the users/community.
But going forward, will there be two ways of using MXNet operators, or will
the NumPy-style API become the de facto method?
I would assume we should have only one (to avoid confusing our users) and
ensure all our existing ops are NumPy-compatible.

Thanks once again!
Chai



On Wed, 22 May 2019 at 09:25, Junru Shao  wrote:

>  Nice progress Jun!
>
> On Wed, May 22, 2019 at 12:12 AM Jun Wu  wrote:
>
> > Dear Community,
> >
> > A few months ago, we submitted this RFC
> >  proposing
> > introducing NumPy-compatible coding experience into MXNet. As it has been
> > some time since the proposal, we would like to share the progress with
> the
> > community and listen to feedbacks and suggestions to enhance technical
> > implementation as well as the way the project is operated.
> >
> > We set our first milestone by tackling the problem of MXNet not
> supporting
> > scalar and zero-size tensors. Last month, we submitted the PR
> >  providing the
> > infrastructure to support those two types of tensors in MXNet. This work
> > has affected almost every file and all language bindings in MXNet
> codebase.
> > It would be impossible to provide a complete solution hadn't there any
> > contributions from many MXNet developers across different organizations.
> >
> > With the infrastructure of supporting scalar and zero-size tensors, we
> are
> > currently working on implementing NumPy operators in MXNet. We created a
> > list of operators <
> https://github.com/apache/incubator-mxnet/issues/14327>
> > to be implemented from the D2L book , and hope that
> we
> > will be able to provide full NumPy operator coverage for the book by the
> > end of next month.
> >
> > In the future, we plan to provide NumPy operator support for GluonCV
> >  and GluonNLP
> > . We also intend to explore the
> > opportunities of extending our work to support the libraries that heavily
> > depend on NumPy, not only from the deep learning world, but also a
> broader
> > data science community, where the techniques employed by deep learning,
> > such as auto differentiation, symbolic programming, GPU computing, and so
> > forth can be beneficial.
> >
> > Thank you very much for your time to read this email and care about our
> > efforts on making MXNet a super user-friendly deep learning framework. We
> > look forward to your comments, suggestions and contributions for this
> > project.
> >
> > Best,
> > Developers of MXNet NumPy Project
> >
> > References
> > [1] Development branch:
> > https://github.com/apache/incubator-mxnet/tree/numpy
> > [2] PR for supporting scalar and zero-size tensors:
> > https://github.com/apache/incubator-mxnet/pull/14661
> > [3] First batch of NumPy operators to be implemented:
> > https://github.com/apache/incubator-mxnet/issues/14327
> > [4] The D2L book: https://github.com/d2l-ai/d2l-en
> > [5] GluonCV: https://github.com/dmlc/gluon-cv
> > [6] GluonNLP: https://github.com/dmlc/gluon-nlp
> >
>


-- 
*Chaitanya Prakash Bapat*
*+1 (973) 953-6299*




Re: Report of MXNet NumPy Project Status

2019-05-22 Thread Junru Shao
 Nice progress Jun!

On Wed, May 22, 2019 at 12:12 AM Jun Wu  wrote:

> [original announcement quoted in full; trimmed here. See the original
> "Report of MXNet NumPy Project Status" message at the bottom of this
> thread.]


Re: Report of MXNet NumPy Project Status

2019-05-22 Thread Pedro Larroy
Thanks, that's a nice summary. Great job and good to know the
progress. I think we can do some exciting stuff in terms of parsing
the Python AST and converting to a computational graph. Maybe we could
brainstorm on that further on the linked ticket.

On Wed, May 22, 2019 at 12:12 AM Jun Wu  wrote:
>
> [original announcement quoted in full; trimmed here. See the original
> "Report of MXNet NumPy Project Status" message at the bottom of this
> thread.]


Report of MXNet NumPy Project Status

2019-05-22 Thread Jun Wu
Dear Community,

A few months ago, we submitted an RFC proposing the introduction of a
NumPy-compatible coding experience into MXNet. As it has been some time
since the proposal, we would like to share our progress with the community
and listen to feedback and suggestions on both the technical implementation
and the way the project is operated.

We set our first milestone as tackling the problem of MXNet not supporting
scalar (0-d) and zero-size tensors. Last month, we submitted the PR
providing the infrastructure to support those two types of tensors in
MXNet. This work has touched almost every file and all language bindings in
the MXNet codebase. It would have been impossible to provide a complete
solution without the contributions of many MXNet developers across
different organizations.
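For readers unfamiliar with the terminology, the two tensor kinds can be illustrated with NumPy itself (a sketch of the target semantics, not MXNet code):

```python
import numpy as np

# "Scalar (0-d)" and "zero-size" tensors, in NumPy terms.
s = np.array(3.14)      # 0-d tensor: no axes, yet still a full ndarray
e = np.ones((0, 5))     # zero-size tensor: one axis has length 0

print(s.shape, s.ndim)  # () 0
print(e.shape, e.size)  # (0, 5) 0
print(e.sum())          # 0.0 -- reducing an empty array yields the identity
```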

With the infrastructure for scalar and zero-size tensors in place, we are
currently working on implementing NumPy operators in MXNet. We created a
list of operators to be implemented from the D2L book, and we hope to
provide full NumPy operator coverage for the book by the end of next month.

In the future, we plan to provide NumPy operator support for GluonCV and
GluonNLP. We also intend to explore opportunities to extend our work to
support libraries that depend heavily on NumPy, not only in the deep
learning world but also in the broader data science community, where
techniques employed by deep learning, such as automatic differentiation,
symbolic programming, and GPU computing, can be beneficial.

Thank you very much for taking the time to read this email and for caring
about our efforts to make MXNet a highly user-friendly deep learning
framework. We look forward to your comments, suggestions, and contributions
to this project.

Best,
Developers of MXNet NumPy Project

References
[1] Development branch: https://github.com/apache/incubator-mxnet/tree/numpy
[2] PR for supporting scalar and zero-size tensors:
https://github.com/apache/incubator-mxnet/pull/14661
[3] First batch of NumPy operators to be implemented:
https://github.com/apache/incubator-mxnet/issues/14327
[4] The D2L book: https://github.com/d2l-ai/d2l-en
[5] GluonCV: https://github.com/dmlc/gluon-cv
[6] GluonNLP: https://github.com/dmlc/gluon-nlp