infra migration

2017-05-01 Thread Dominic Divakaruni
(trying this from my personal email, since the previous one didn't go out)

Nudging this to the top of your inbox. Relocating infrastructure to its
Apache home is one of the first steps in the Incubator and is long overdue.
Contributors and mentors, please voice your opinions and move this migration
forward. I am restating the relocation work list below with two additional
items that have been suggested. Please add any that may be missing.



1.   Code relocation from GitHub to Apache.

a.   What are the steps involved in the transfer? Are there any associated
timelines that the community needs to be aware of? *Apache Mentors*, we
need your input.

b.   MXNet has cross-dependencies on several DMLC foundational
sub-modules like core and ps-lite. How does a migration impact such cross
links to non-Apache projects? Should we merge all the MXNet-dependent
modules into a single repository or keep them as separate source
repositories at Apache? How do projects handle code dependencies/links
with other non-Apache projects? *Apache Mentors*, we need your input.

c.   What are the project’s build server requirements? *Committers*, can
you please comment? *Apache Mentors*, can one-off infrastructure needs be
addressed?

d.   *Contributors*, please raise any additional questions or concerns
you may have.

2.   Moving from mxnet.io to mxnet.apache.org

a.   AWS may be willing to sponsor the development of a new website,
following Apache branding and guidelines, built to the collective
requirements of the community! *Contributors/Mentors*, what do you think?
Are there any other potential sponsors for a new website?

b.  What happens to search links and bookmarks when the website moves
to an Apache URL? *Mentors*, do you have some guidance on this?

3.   Moving project communication away from the current Slack team
(apache-mxnet) to the official Apache Slack team, per Henri’s recommendation.

a.   There are some active discussions on Slack. One of them has to do
with the docs improvement effort. Suggestion: should the move occur at the
conclusion of the current docs project? Maybe doing this along with the
code base relocation makes sense? *Contributors, Mentors*, thoughts?







*From: *"Divakaruni, Dominic" 
*Date: *Wednesday, April 26, 2017 at 4:54 PM
*To: *"dev@mxnet.incubator.apache.org" 
*Subject: *infra migration



Hello all,

It seems that the next step for MXNet is to migrate the infrastructure from
GitHub, and so I thought I'd tee up a few questions to start a discussion.
Please weigh in with your thoughts, comments and feedback.



-  What infrastructure services should move to Apache? What are the
steps associated with each and are there any timeframe requirements that
the community needs to work towards?

-  MXNet has particular Jenkins build requirements. Can someone
lay out these needs? Can Apache support these unique requirements?

-  MXNet has cross-dependencies on several DMLC foundational
sub-modules like core and ps-lite. How does a migration impact such cross
links to non-apache projects? Should we merge all the MXNet dependent
modules into a single repository or keep them as separate source
repositories at Apache?

-  What happens to search links and bookmarks when the website
moves to an Apache URL?



Thx,

Dom



-- 


Dominic Divakaruni
206.475.9200 Cell


PyPI for MXNet 0.9.5

2017-05-03 Thread Dominic Divakaruni
Many thanks to szha@ for this!

Python pip packages for OS X and Linux, in MKL, CUDA 7.5, and CUDA 8
variants, are now available! As an added bonus, the cu75 and cu80 packages
have cuDNN v6 bundled!

https://pypi.python.org/pypi/mxnet
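
A quick smoke test after installing; the per-variant package names below
assume the cu75/cu80/MKL suffixes mentioned above, so double-check the PyPI
page for the exact spelling on your platform:

    #   pip install mxnet         # CPU-only
    #   pip install mxnet-mkl     # CPU with MKL
    #   pip install mxnet-cu75    # CUDA 7.5 (cuDNN v6 bundled)
    #   pip install mxnet-cu80    # CUDA 8.0 (cuDNN v6 bundled)
    import mxnet as mx

    # a tiny NDArray computation confirms the native library loaded
    a = mx.nd.ones((2, 3)) * 2
    print(a.asnumpy())  # [[2. 2. 2.] [2. 2. 2.]]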


new logo choices

2017-05-04 Thread Dominic Divakaruni
Apache MXNet has new logo design contributions. Check them out and let us
know what you think!

https://github.com/dmlc/mxnet/issues/6103


Re: Request an invitation

2017-06-22 Thread Dominic Divakaruni
Hi ysh329,
I've added you.

Check out http://mxnet.io/how_to/new_op.html and start a thread on this
mailing list if you run into issues.
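
For a flavor of what that guide covers, here is a minimal custom operator
using the Python CustomOp API: a toy op that scales its input by 2, not the
depth-wise convolution itself. (For performance you would ultimately want a
native C++ operator, which the guide also walks through.)

    import mxnet as mx

    class Scale2x(mx.operator.CustomOp):
        """Toy operator: multiplies its input by 2."""
        def forward(self, is_train, req, in_data, out_data, aux):
            self.assign(out_data[0], req[0], in_data[0] * 2)

        def backward(self, req, out_grad, in_data, out_data, in_grad, aux):
            # d(2x)/dx = 2, so just scale the incoming gradient
            self.assign(in_grad[0], req[0], out_grad[0] * 2)

    @mx.operator.register("scale2x")
    class Scale2xProp(mx.operator.CustomOpProp):
        def __init__(self):
            super(Scale2xProp, self).__init__(need_top_grad=True)

        def list_arguments(self):
            return ['data']

        def list_outputs(self):
            return ['output']

        def infer_shape(self, in_shape):
            # output shape == input shape; no auxiliary states
            return in_shape, [in_shape[0]], []

        def create_operator(self, ctx, shapes, dtypes):
            return Scale2x()

    x = mx.nd.array([[1, 2], [3, 4]])
    y = mx.nd.Custom(x, op_type='scale2x')
    print(y.asnumpy())  # [[2. 4.] [6. 8.]]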

On Wed, Jun 21, 2017 at 8:21 PM,  wrote:

> Hello, my dear MXNet brothers!
> I request an invitation to the MXNet Slack channel. My Slack email is
> ysh...@sina.com.
> I want to implement a depth-wise conv operator. Is there any guide on how
> to implement an operator?
> Besides, I want to be a real MXNet contributor, not only contribute small
> docs fixes.
> Thanks a lot!




-- 


Dominic Divakaruni
206.475.9200 Cell


Re: Board report due

2017-07-04 Thread Dominic Divakaruni
Hello all,
Hope those of us in the US are having a great 4th of July! I've taken a
stab at a draft of the report. Section 6 needs to be updated. Please pitch
in with your updates.

1. Your project name: Apache MXNet



2. A brief description of your project, which assumes no knowledge of
the project or necessarily of its field: MXNet is an open-source deep
learning framework that allows you to define, train, and deploy deep neural
networks on a wide array of devices, from cloud infrastructure to mobile
devices. It is highly scalable, allowing for fast model training, and
supports a flexible programming model and multiple languages. MXNet allows
you to mix symbolic and imperative programming flavors to maximize both
efficiency and productivity. MXNet is built on a dynamic dependency
scheduler that automatically parallelizes both symbolic and imperative
operations on the fly. A graph optimization layer on top of that makes
symbolic execution fast and memory efficient. The MXNet library is portable
and lightweight, and it scales to multiple GPUs and multiple machines.

3. A list of the three most important issues to address in the move
towards graduation.

3.1.  Migrate code (GitHub) and website to Apache.

3.2.  Grow the community:

3.2.1. Expand reference material, including new machine learning research
based on MXNet, tutorials, documented use cases, and architecture
documentation.

3.2.2. Improve the user experience, for example with better error messages.

3.2.3. Improve support for various programming languages.

3.2.4. Establish a dependable release process consistent with the Apache
way.

3.3.  Features:

3.3.1. Capabilities (such as low-precision support and quantization) that
allow models to run efficiently on mobile and edge devices, plus
integrations with mobile and edge device acceleration drivers.

3.3.2. Improve performance on CPUs and GPUs.



4. Any issues that the Incubator PMC or ASF Board might wish/need to be
aware of:

None.



5. How has the community developed since the last report.

5.1.  On 5/27 MXNet published a comprehensive edit and makeover of the
documentation including tutorials, how-to’s, APIs and architecture guides.
This was a broad effort that involved over 40 contributors.

5.2.  The PMC voted in a new contributor who has been helping with the code
migration and setup of the test infrastructure. We are making slow but
steady progress towards getting the GitHub code migrated. The target date
for migration is 7/17. Website migration will happen next.

5.3.  Slack and dev@ are being used more actively.

6. How has the project developed since the last report.

6.1.  Since the last report 42 authors have pushed 326 commits to master.

6.2.  (Previous Update)  On master, 502 files have changed and there have
been 26,246 additions and 12,188 deletions. Count of Closed Issues = 62,
Count of new Issues = 146, Count of Merged Pull Requests = 161, Count of
Proposed Pull Requests = 27.

6.3.  (Previous Update) The API Documentation has improved.

6.4.  (Previous Update) More features (e.g. operators) requested by the
user community have been added.

6.5.  (Previous Update) Hardware acceleration work such as cuDNN v6
integration and MKL ML package integration was completed.

6.6.  (Previous Update) A new Perl language binding for MXNet was added.



7. How does the podling rate their own maturity: Maturity = Low.


On Mon, Jul 3, 2017 at 11:39 PM, Henri Yandell  wrote:

> In case the relentless automated pinging hasn't given it away, we've a
> board report due.
>
> Hen
>



-- 


Dominic Divakaruni
206.475.9200 Cell


Re: Board report due

2017-07-05 Thread Dominic Divakaruni
I've made some updates and also posted the report to a Google Doc:
https://docs.google.com/document/d/1PGhs96klZB6DXhpK9_biPh4-aCm8-bWwFzexnOW_GMA/edit?usp=sharing

1. Your project name: Apache MXNet

2. A brief description of your project, which assumes no knowledge of
the project or necessarily of its field

MXNet is an open-source deep learning framework that allows you to define,
train, and deploy deep neural networks on a wide array of devices, from
cloud infrastructure to mobile devices. It is highly scalable, allowing for
fast model training, and supports a flexible programming model and multiple
languages. MXNet allows you to mix symbolic and imperative programming
flavors to maximize both efficiency and productivity. MXNet is built on a
dynamic dependency scheduler that automatically parallelizes both symbolic
and imperative operations on the fly. A graph optimization layer on top of
that makes symbolic execution fast and memory efficient. The MXNet library
is portable and lightweight, and it scales to multiple GPUs and multiple
machines.

3. A list of the three most important issues to address in the move
towards graduation.

3.1.  Migrate code (GitHub) and website to Apache.

3.2.  Grow the community:

3.2.1. Expand reference material, including new machine learning research
based on MXNet, tutorials, documented use cases, and architecture
documentation.

3.2.2. Improve the user experience, for example with better error messages.

3.2.3. Improve support for various programming languages.

3.2.4. Establish a dependable release process consistent with the Apache
way.

3.3.  Features:

3.3.1. Capabilities (such as low-precision support and quantization) that
allow models to run efficiently on mobile and edge devices, plus
integrations with mobile and edge device acceleration drivers.

3.3.2. Improve performance on CPUs and GPUs.

4. Any issues that the Incubator PMC or ASF Board might wish/need to be
aware of: None.

5. How has the community developed since the last report.

5.1.  On 5/27 MXNet published a comprehensive edit and makeover of the
documentation including tutorials, how-to’s, APIs and architecture guides.
This was a broad effort that involved over 40 contributors.

5.2.  The PMC voted in a new contributor who has been helping with the code
migration and setup of the test infrastructure. We are making slow but
steady progress towards getting the GitHub code migrated. The target date
for migration is 7/17. Website migration will happen next.

5.3.  Slack and dev@ are being used more actively.

5.4.  Two presentations/workshops on Apache MXNet at the O’Reilly AI Conf
on 6/27 and 6/28.

5.5.  A new blog post published on 6/23 showing users how to build a
real-time object classification system with Apache MXNet on Raspberry Pi:
https://aws.amazon.com/blogs/ai/build-a-real-time-object-classification-system-with-apache-mxnet-on-raspberry-pi/



6. How has the project developed since the last report.

6.1.  Since the last report 42 authors have pushed 326 commits to master.

6.2.  Documentation- Architecture guides, How To’s, Tutorials, and APIs
have been improved.

6.3.  More features (e.g. operators) requested by the user community have
been added.

6.4.   A new Perl language binding for MXNet was added.

7. How does the podling rate their own maturity. Maturity = Low.


On Wed, Jul 5, 2017 at 7:24 AM, Suneel Marthi  wrote:

> Dom,
>
> It's much easier to comment/modify if you create a Google Doc and send an
> editable link out. Please do that.
>
> On Tue, Jul 4, 2017 at 6:06 PM, Dominic Divakaruni <
> dominic.divakar...@gmail.com> wrote:
>
> > Hello all,
> > Hope those of us in the US are having a great 4th of July! I've taken a
> > stab at a draft of the report. Section 6 needs to be updated. Please
> pitch
> > in with your updates
> >
> > 1. Your project name: Apache MXNet
> >
> >
> >
> > 2. A brief description of your project, which assumes no knowledge of
> > the project or necessarily of its field: MXNet is an open-source deep
> > learning framework that allows you to define, train, and deploy deep
> neural
> > networks on a wide array of devices, from cloud infrastructure to mobile
> > devices. It is highly scalable, allowing for fast model training, and
> > supports a flexible programming model and multiple languages. MXNet
> allows
> > you to mix symbolic and imperative programming flavors to maximize both
> > efficiency and productivity. MXNet is built on a dynamic dependency
> > scheduler that automatically parallelizes both symbolic and imperative
> > operations on the fly. A graph optimization layer on top of that makes
> > symbolic execution fast and memory efficient. The MXNet library is
> portable
> > and lightweight, and it scales to multiple GPUs and multiple machines.

Re: Podling Report Reminder - July 2017

2017-07-05 Thread Dominic Divakaruni
bator/July2017
>
> Note: This is manually populated. You may need to wait a little before
> this page is created from a template.
>
> Mentors
> ---
>
> Mentors should review reports for their project(s) and sign them off on
> the Incubator wiki page. Signing off reports shows that you are
> following the project - projects that are not signed may raise alarms
> for the Incubator PMC.
>
> Incubator PMC
>



-- 


Dominic Divakaruni
206.475.9200 Cell


Re: MXNet -> Apache Migration proposal

2017-07-07 Thread Dominic Divakaruni
> Wiki, stars, watchers, webhooks, services, deploy keys
>
> 7. What will my fork be associated with after migration?
>    1. It will remain associated with the transferred repository.
>
> 8. Will I have to change all references to http://github.com/dmlc/mxnet ?
>    1. All links to http://github.com/dmlc/mxnet will automatically be
>       redirected to the new location when issuing `git clone`, `git fetch`,
>       `git push`, etc. (as long as we don’t create another “mxnet”
>       repository under DMLC). However, to avoid confusion, you can change
>       the links where possible, and change remote:
>       `git remote set-url origin new_url`
>
> 9. How do I gain write access to the repo?
>    1. First, you need to be a committer. Then use
>       https://gitbox.apache.org/setup/ to associate the Apache and GitHub
>       accounts. Note that all committers will need to enable 2-factor
>       authentication on GitHub.
>
> 10. Are we also moving mxnet CI? If so, what is the new location? Will
>     nightly tests continue to run? How can I add new tests?
>    1. We will rely on Apache’s build server to run our builds.
>    2. It will first only run unit tests for PRs and merges. Tests can be
>       added following the structure setup in
>       https://github.com/dmlc/mxnet/blob/master/Jenkinsfile .
>    3. Nightly tests are currently running at
>       http://jenkins-master-elb-1979848568.us-east-1.elb.amazonaws.com/
>       and will gradually run in Apache’s build server too. There, we will
>       provide artifacts such as pip wheels and source packages for the
>       community to test.
>       1. Automated releases will happen on
>          http://jenkins-master-elb-1979848568.us-east-1.elb.amazonaws.com/
>          as Apache’s build doesn’t support key storage.
>
> 11. Is mxnet.io moving too?
>    1. For some time we will have both mxnet.apache.org and mxnet.io
>       hosting the docs. When we are confident that mxnet.apache.org is
>       stable, we will redirect mxnet.io to there.
>
> Link on GitHub repo transfers:
> https://help.github.com/articles/about-repository-transfers/
>
> Feel free to ask any other questions.
>
> On Wed, Jun 7, 2017 at 12:53 PM, Ly Nguyen wrote:
>
> > I’ve documented the detailed steps below on the process of migrating
> > MXNet -> Apache for open feedback and discussion.
> >
> > Essentially Amazon will be providing the GPU build slaves to be hooked
> > into Apache’s Jenkins build master. We’ll first make sure that Apache
> > can build a fork of MXNet, before officially transferring ownership of
> > the MXNet repo.
> >
> > Steps to migration:
> > 1.  Provide Apache with Linux slaves & slave tags
> >     a.  Provide Apache with slave configuration (tags, remote root dir,
> >         etc.)
> >     b.  Spin up 6 slaves
> >     c.  Launch connection via JNLP
> > 2.  Apache forks MXNet repo and makes sure builds are successful on
> >     their build setup
> >     a.  Ask Apache to give me committer rights
> >     b.  I remove the Windows jobs until a later time
> >     c.  Apache sets up Jenkins jobs and GitHub webhooks
> >         i.  Build every commit and origin/fork PRs without merge (main
> >             Jenkinsfile)
> >         ii. Nightly job (nightly Jenkinsfile; will start with a dummy
> >             one and add more configurations later)
> >     d.  If Windows slave setup is available, provide it to Apache and
> >         enable the jobs again
> > 3.  Transfer the repo and point the build setup there
> > 4.  Apache deploys the docs to their website
> >
> > Open security questions:
> > 1.  How can we ensure that our slaves are not used by other projects?
> >     a.  It’s not, it’s a social contract.
> > 2.  To protect the slave hosts, would running the Jenkins slave inside a
> >     Docker container be a solution, or is there a recommended best
> >     practice?
> >     a.  Run the slave behind a NAT gateway and launch via JNLP
> > 3.  Does Apache place an SSH key inside the build host for docs
> >     deployment to the website? Are there security concerns there?
> >     a.  The only slaves that are allowed to deploy docs are
> >         ASF-controlled. Just provide the build command.



-- 


Dominic Divakaruni
206.475.9200 Cell


[alpha release] MXNet for Keras v1.2.2

2017-07-09 Thread Dominic Divakaruni
An alpha release of Keras v1.2.2 with support for MXNet as a backend is
now available! Keras v1.2.2 users can now go from idea to a trained model
in as little time as possible. :) Check out
https://github.com/dmlc/keras/releases/tag/alpha for more information.



Installation requires that you build MXNet from source, as some of the code
to support Keras has been merged following the v0.10 release. The next
MXNet release (our first Apache release) will produce pip install packages
that will make installation much easier.
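
Once built, here is a minimal smoke test. One assumption on my part: that
the fork follows stock Keras's backend selection, i.e. you set "backend":
"mxnet" in ~/.keras/keras.json; check the release notes if that differs.

    # Minimal Keras 1.2.2 training run on the MXNet backend (toy data).
    # ASSUMES ~/.keras/keras.json has been edited to set "backend": "mxnet".
    import numpy as np
    from keras.models import Sequential
    from keras.layers import Dense

    model = Sequential([
        Dense(8, input_dim=4, activation='relu'),
        Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='sgd', loss='binary_crossentropy')

    x = np.random.rand(32, 4)
    y = np.random.randint(0, 2, size=(32, 1))
    model.fit(x, y, nb_epoch=2, batch_size=8)  # Keras 1.x spells it nb_epoch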



Please test and open issues for bugs or questions here
https://github.com/dmlc/keras/issues


Big thanks to @skm, @howard0su, @yajiedesign, @piiswrong, and @wayao for
this work!


Re: ZeroMQ licensing in Apache MXNet

2017-07-10 Thread Dominic Divakaruni
Greg, et al, do you believe this is a non-issue and resolved based on what
Mu has said?

On Fri, Jul 7, 2017 at 9:38 AM, Mu Li  wrote:

> ZeroMQ is used only if `USE_DIST_KVSTORE = 1` is set during compilation.
> By default, it is 0.
>
> The source code is close to the following:
>
> #if MXNET_USE_DIST_KVSTORE
> #include "zmq.h"
> #endif  // MXNET_USE_DIST_KVSTORE
>
> Replacing ZeroMQ with another similar library is straightforward, but it
> is marked as low priority because only a small portion of users want to
> compile with USE_DIST_KVSTORE = 1.
>
> On Fri, Jul 7, 2017 at 1:54 AM, Greg Stein  wrote:
>
> > If it is optional at compile-time, then a header file is very allowable.
> As
> > long as MXNet can be compiled without ZeroMQ on the box, then I see no
> > issue at all.
> >
> > On Thu, Jul 6, 2017 at 3:51 PM, Felix Cheung 
> > wrote:
> >
> > > Aren't the release binaries going to contain bits from ZeroMQ because
> > > of the #include, though?
> > >
> > > That header file is still going to be LGPL 3.0 licensed, right?
> > >
> > >
> > > On Thu, Jul 6, 2017 at 12:45 PM John D. Ament 
> > > wrote:
> > >
> > > > Mu,
> > > >
> > > > So what happens when ZeroMQ is not available, do you fall back to
> > > something
> > > > else?
> > > >
> > > > I'm inclined to say that this is allowable, knowing that its an
> > optional
> > > > dynamically linked dependency that has an alternative.  Assuming it
> has
> > > an
> > > > alternative.
> > > >
> > > > I would strongly encourage podlings to try to leverage what the ASF
> > > > provides, we ship a number of messaging systems that may be better
> > from a
> > > > licensing stand point - ActiveMQ, RocketMQ, Pulsar.
> > > >
> > > > John
> > > >
> > > > On Thu, Jul 6, 2017 at 3:27 PM Mu Li  wrote:
> > > >
> > > > > MXNet's backend is written in C++, which is not able to use the
> > > > > Java interface.
> > > > >
> > > > > On Thu, Jul 6, 2017 at 12:25 PM, Luciano Resende <
> > luckbr1...@gmail.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > Are you guys able to use this (which is what we use in Apache
> > Toree)?
> > > > > >
> > > > > > https://github.com/zeromq/jeromq
> > > > > >
> > > > > > Which has been successfully relicensed?
> > > > > > https://github.com/zeromq/jeromq/blob/master/LICENSE
> > > > > >
> > > > > >
> > > > > > On Wed, Jul 5, 2017 at 11:23 PM, Henri Yandell <
> bay...@apache.org>
> > > > > wrote:
> > > > > >
> > > > > > > One of the items that is on the list to do before releasing
> > Apache
> > > > > MXNet
> > > > > > is
> > > > > > > removing ZeroMQ from the codebase/dependencies.
> > > > > > >
> > > > > > > ZeroMQ is licensed under the LGPL 3.0 with an exception for
> > > > > > > static compiling.
> > > > > > >
> > > > > > > They have long been interested in relicensing to MPL 2.0, but
> > > > > > > haven't made much progress, though they did relicense JeroMQ
> > > > > > > (the Java wrapper/implementation) last year.
> > > > > > >
> > > > > > > In the last few months they've made a lot of progress towards
> > > > > > relicensing:
> > > > > > > https://github.com/zeromq/libzmq/tree/master/RELICENSE
> > > > > > >
> > > > > > > I'd like to ask on legal-discuss@ for an exception (one year?)
> > to
> > > > > > continue
> > > > > > > using ZeroMQ, with prominent documentation, in MXNet given the
> > > trend
> > > > > > > towards MPL 2.0.
> > > > > > >
> > > > > > > Any concerns before I do so?
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > > Hen
> > > > > > >
> > > > > >
> > > > > >
> > > > > >
> > > > > > --
> > > > > > Luciano Resende
> > > > > > http://twitter.com/lresende1975
> > > > > > http://lresende.blogspot.com/
> > > > > >
> > > > >
> > > >
> > >
> >
>



-- 


Dominic Divakaruni
206.475.9200 Cell


Re: Invitation Request

2017-07-12 Thread Dominic Divakaruni
I've invited you both

On Wed, Jul 12, 2017 at 7:18 PM, 梁德澎  wrote:

> Sorry bo, I can't invite you to the MXNet slack channel, because I am not
> in the channel.  :(
>
> Best
> Depeng
>
> 2017-07-13 4:59 GMT+08:00 Bo Hu :
>
> > Hi,
> >
> > I wonder could you please invite me to the Mxnet slack channel?
> >
> > Best,
> > Bo
> >
>



-- 


Dominic Divakaruni
206.475.9200 Cell


Jenkins tests for Perl

2017-07-25 Thread Dominic Divakaruni
Hi Ly,
Welcome back from vacation. Hope you had a good one. Sergey has been
looking to get Jenkins tests in place for Perl. Here is his PR:
https://github.com/dmlc/mxnet/pull/7170

Can you please help?

-Dom


Re: Join mxnet slack channel

2017-07-26 Thread Dominic Divakaruni
Hello Xiaoyuan,
I've added you. You should see an invite.

On Tue, Jul 25, 2017 at 11:35 AM, XiaoYuan Zhu  wrote:

> Hi,
>
> I am a data scientist at Capital One. I would like to join the MXNet channel.
>
> Thanks,
> Xiaoyuan
>
> Sent from my iPhone
>



-- 


Dominic Divakaruni
206.475.9200 Cell


Code contribution to Apache MXNet with a BSD 3 clause

2017-08-02 Thread Dominic Divakaruni
Hi All,
We've worked with our friends in Cupertino to build a tool that converts
MXNet models to their Core ML format, so you can build iOS and macOS apps
with them in Xcode.

Apple did the initial work on this tool and sent us their code to finish
and maintain going forward. We are ready to make it available for use and
would like to open a PR to commit it to Apache MXNet within the next day.

Attached is their license file that goes along with the code. Please let me
know if there are any issues or concerns with this.
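
To make the intent concrete, here is a hypothetical invocation sketch; the
script name, flags, and file names are illustrative placeholders I am using
for discussion, not the final interface (which will be documented in the PR):

    # Convert an MXNet checkpoint (squeezenet-symbol.json plus
    # squeezenet-0000.params) into a .mlmodel file that Xcode can import.
    # All names and flags below are placeholders, not the shipped CLI.
    python mxnet_coreml_converter.py \
        --model-prefix=squeezenet \
        --epoch=0 \
        --input-shape='{"data":"3,227,227"}' \
        --class-labels=synset.txt \
        --output-file=squeezenet.mlmodel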


Thanks!
-- 

Dominic Divakaruni
206.475.9200 Cell
Copyright (c) 2017, Apple Inc. All rights reserved.

Redistribution and use in source and binary forms, with or without 
modification, are permitted provided that the following conditions are met:  

1.  Redistributions of source code must retain the above copyright notice, this 
list of conditions and the following disclaimer.

2.  Redistributions in binary form must reproduce the above copyright notice, 
this list of conditions and the following disclaimer in the documentation 
and/or other materials provided with the distribution.

3.  Neither the name of the copyright holder(s) nor the names of any 
contributors may be used to endorse or promote products derived from this 
software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND 
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED 
WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE 
DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR 
ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES 
(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; 
LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON 
ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT 
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS 
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


Re: Code contribution to Apache MXNet with a BSD 3 clause

2017-08-03 Thread Dominic Divakaruni
Legal-discuss@ to Bcc

Thanks, John. The code will be part of Apache MXNet and our team will
commit to maintaining this. I will reach out to Apple, but I am not
optimistic they will make any changes.

Sounds like we are ok either way. So thanks!

Dom

On Thu, Aug 3, 2017 at 3:26 AM, John D. Ament  wrote:

> Ideally they would provide the contribution under the Apache License, v2.
> Are they maintaining these files going forward, or are they ending up in
> the Apache MXNet repository?  Would they be agreeable to providing an ICLA
> or an SGA?
>
> Either way, this falls under the Incubator's IP Clearance policy.  Please
> continue this discussion on general@incubator.
>
> John
>
> On Thu, Aug 3, 2017 at 4:10 AM Dominic Divakaruni <
> dominic.divakar...@gmail.com> wrote:
>
> > Hi All,
> > We've worked with our friends in Cupertino to build a tool that converts
> > MXNet models to their CoreML format so you can build iOS and MacOS apps
> > with it in XCode.
> >
> > Apple had done the initial work on this tool and sent us their code to
> > finish and maintain going forward. We are ready to make it available for
> > use and would like to make a PR to commit it to Apache MXNet in the next
> > day.
> >
> > Attached is their license file that goes along with the code. Please let
> > me know if there are any issues or concerns with this.
> >
> >
> > Thanks!
> > --
> >
> > Dominic Divakaruni
> > 206.475.9200 Cell
> >
> > ---------
> > To unsubscribe, e-mail: legal-discuss-unsubscr...@apache.org
> > For additional commands, e-mail: legal-discuss-h...@apache.org
>



-- 


Dominic Divakaruni
206.475.9200 Cell


Re: Code contribution to Apache MXNet with a BSD 3 clause

2017-08-03 Thread Dominic Divakaruni
+general@incubator

On Thu, Aug 3, 2017 at 8:22 AM Dominic Divakaruni <
dominic.divakar...@gmail.com> wrote:

> Legal-discuss@ to Bcc
>
> Thanks, John. The code will be part of Apache MXNet and our team will
> commit to maintaining this. I will reach out to Apple, but I am not
> optimistic they will make any changes.
>
> Sounds like we are ok either way. So thanks!
>
> Dom
>
> On Thu, Aug 3, 2017 at 3:26 AM, John D. Ament 
> wrote:
>
>> Ideally they would provide the contribution under the Apache License, v2.
>> Are they maintaining these files going forward, or are they ending up in
>> the Apache MXNet repository?  Would they be agreeable to providing an ICLA
>> or an SGA?
>>
>> Either way, this falls under the Incubator's IP Clearance policy.  Please
>> continue this discussion on general@incubator.
>>
>> John
>>
>> On Thu, Aug 3, 2017 at 4:10 AM Dominic Divakaruni <
>> dominic.divakar...@gmail.com> wrote:
>>
>> > Hi All,
>> > We've worked with our friends in Cupertino to build a tool that converts
>> > MXNet models to their CoreML format so you can build iOS and MacOS apps
>> > with it in XCode.
>> >
>> > Apple had done the initial work on this tool and sent us their code to
>> > finish and maintain going forward. We are ready to make it available for
>> > use and would like to make a PR to commit it to Apache MXNet in the next
>> > day.
>> >
>> > Attached is their license file that goes along with the code. Please let
>> > me know if there are any issues or concerns with this.
>> >
>> >
>> > Thanks!
>> > --
>> >
>> > Dominic Divakaruni
>> > 206.475.9200 Cell
>> >
>> > -
>> > To unsubscribe, e-mail: legal-discuss-unsubscr...@apache.org
>> > For additional commands, e-mail: legal-discuss-h...@apache.org
>>
>
>
>
> --
>
>
> Dominic Divakaruni
> 206.475.9200 Cell
>
-- 


Dominic Divakaruni
206.475.9200 Cell


Re: Announcement of DMLC/TVM Our Deep Learning Compilation Stack

2017-08-17 Thread Dominic Divakaruni
Congratulations!!

Can you share what you are thinking with regard to how you propose to
integrate TVM into MXNet?

On Thu, Aug 17, 2017 at 12:52 PM Minjie Wang  wrote:

> Great news! This is a huge step towards a highly efficient deep learning
> system that is portable on different hardware. Thanks Tianqi and the
> efforts of all the contributors.
>
> On Thu, Aug 17, 2017 at 3:41 PM, Tianqi Chen 
> wrote:
>
> > Hi Guys:
> > I am super excited to announce DMLC/TVM, our deep learning compilation
> > stack. There will be followups on MXNet to add official support soon.
> > To check what it is, see the announcement
> >
> > http://tvmlang.org/2017/08/17/tvm-release-announcement.html
> >
> >
> > Tianqi
> >
>
>
>
> --
> Minjie Wang
> *New York University | Computer Science*
> 715 Broadway, New York, NY, 10009
>
-- 


Dominic Divakaruni
206.475.9200 Cell


Re: [VOTE] Apache MXNet (incubating) 0.11.0 release RC3

2017-08-29 Thread Dominic Divakaruni
Thanks for the reply, John. None of the mentors have voted so far.
Henri, Suneel, Marcus, Sebastian, can you gents please review and vote?

Also, Henri, didn't you mention that there was an SGA for this project?
Sorry, I don't recollect the exact details on the SGA bit.

Dom


On Tue, Aug 29, 2017 at 5:27 AM, John D. Ament 
wrote:

> Non pmc members can vote non-binding.  Usually mentors review releases.
> Have any of your mentors reviewed and voted on it?  Due to there being no
> SGA its a harder release to review.  I also need to cross check ICLAs and
> files that have changed license.
>
> On Aug 29, 2017 8:13 AM, "Dominic Divakaruni" <
> dominic.divakar...@gmail.com>
> wrote:
>
> > Can this vote pass without the three +1's from the PMC? Can the
> committers
> > for this project provide binding votes on general@ to weigh in on this
> > release?
> >
> > On Mon, Aug 28, 2017 at 5:47 PM, Meghna Baijal <
> meghnabaijal2...@gmail.com
> > >
> > wrote:
> >
> > > Hi All,
> > > This is a reminder that the vote to release MXNet (incubating) 0.11.0
> is
> > > still open.
> > > The vote will close on Tuesday, August 29, 2017 8.04 PM UTC.
> > >
> > > [ ] +1 Release this package as 0.11.0
> > > [ ] +0 no opinion
> > > [ ] -1 Do not release this package because…
> > >
> > > Thanks,
> > > Meghna
> > >
> > >
> > >
> > > > On Aug 25, 2017, at 1:04 PM, Meghna Baijal <
> meghnabaijal2...@gmail.com
> > >
> > > wrote:
> > > >
> > > > Hi all
> > > >
> > > > This is a call for a releasing Apache MXNet (incubating) 0.11.0,
> > release
> > > > candidate 3.
> > > >
> > > > Apache MXNet community has voted and approved the release.
> > > >
> > > > Vote thread:
> > > > https://lists.apache.org/thread.html/2695a598ae0622484d4c886dc5b2ea
> > > 823c306ca4ebef66accec6ee76@%3Cdev.mxnet.apache.org%3E <
> > > https://lists.apache.org/thread.html/2695a598ae0622484d4c886dc5b2ea
> > > 823c306ca4ebef66accec6ee76@%3Cdev.mxnet.apache.org%3E>
> > > >
> > > >
> > > > Result thread:
> > > > https://lists.apache.org/thread.html/d860c49194ec71c5c83ac0fa68df13
> > > 050dbfada4ff7052be3401fc1b@%3Cdev.mxnet.apache.org%3E <
> > > https://lists.apache.org/thread.html/d860c49194ec71c5c83ac0fa68df13
> > > 050dbfada4ff7052be3401fc1b@%3Cdev.mxnet.apache.org%3E>
> > > >
> > > >
> > > > The source tarball, including signatures, digests, etc. can be found
> > at:
> > > > https://dist.apache.org/repos/dist/dev/incubator/mxnet/0.11.0.rc3/ <
> > > https://dist.apache.org/repos/dist/dev/incubator/mxnet/0.11.0.rc3/>
> > > >
> > > >
> > > > The release tag can be found here:
> > > > https://github.com/apache/incubator-mxnet/tree/0.11.0.rc3 <
> > > https://github.com/apache/incubator-mxnet/tree/0.11.0.rc3>
> > > >
> > > >
> > > > The release hash is ba6413d29769075dd883ec5fe6eb24afc98fb3fd and can
> > be
> > > found here:
> > > > https://github.com/apache/incubator-mxnet/commit/
> > > ba6413d29769075dd883ec5fe6eb24afc98fb3fd <https://github.com/apache/
> > > incubator-mxnet/commit/ba6413d29769075dd883ec5fe6eb24afc98fb3fd>
> > > >
> > > >
> > > > Release artifacts are signed with the following key:
> > > > AA3EBCC3E65A768AE3D2A64B8EF47B8720E8C549
> > > >
> > > >
> > > > KEY files are available here:
> > > > https://dist.apache.org/repos/dist/dev/incubator/mxnet/0.11.0.rc3/ <
> > > https://dist.apache.org/repos/dist/dev/incubator/mxnet/0.11.0.rc3/>
> > > >
> > > >
> > > > For information about the contents of this release, see:
> > > > https://cwiki.apache.org/confluence/display/MXNET/v0.
> > > 11.0+Release+Notes+-+MXNet+v0.11+Release+Candidate <
> > > https://cwiki.apache.org/confluence/display/MXNET/v0.
> > > 11.0+Release+Notes+-+MXNet+v0.11+Release+Candidate>
> > > >
> > > >
> > > > The vote will be open for at least 72 hours.
> > > >
> > > > [ ] +1 Release this package as 0.11.0
> > > > [ ] +0 no opinion
> > > > [ ] -1 Do not release this package because...
> > > >
> > > > Thanks.
> > >
> > >
> >
> >
> > --
> >
> >
> > Dominic Divakaruni
> > 206.475.9200 Cell
> >
>



-- 


Dominic Divakaruni
206.475.9200 Cell


Re: [VOTE] Apache MXNet (incubating) 0.11.0 release RC3

2017-08-30 Thread Dominic Divakaruni
+ dev@
This is great feedback. Thanks to the PMC for reviewing. Meghna is on point
to track all this feedback and capture action items on the wiki and in Jira.

Overall it looks like we have 3 binding +1's.

Are we good to proceed with making the release official?

Best,
Dom

On Tue, Aug 29, 2017 at 10:37 PM, Henri Yandell  wrote:

> +1 (binding) to the release.
>
> Minor items (fix next release):
>
> * R-package/ directory is empty (needs removing as confusing, or a README
> could be added explaining the code is not present and can be found outside
> of Apache).
> * Agreed with Justin that there needs to be a Getting Started text file of
> some kind. This could be a link from the README.md to the docs/get_started
> directory perhaps; though .md format isn't the easiest to read (ie: link to
> website and over time we should consider whether a local version is
> needed).
> * CONTRIBUTORS.md calls the project DMLC/MXNet.
> * The NEWS.md refers to 0.11.0-rc3 as the latest version. This should refer
> to the version being released rather than the rc3.
> * The README.md also refers to rc3. It shouldn't refer to rc3 as a release,
> and ideally it would refer to 0.11.0 as a release (though tricky to be
> forward looking given that GitHub treats that as a homepage). Randomly
> noting that the What's New should be dated.
> * The README.md refers to the copyright being owned by Contributors. Needs
> updating to a license statement (with NOTICE handling the copyright side of
> things).
>
> There's a lot of continual cleanup to do here; but given that this is a
> project that has previously been released (pre-apache), and many of these
> items are an issue in the old version (ie: existing users have already
> dealt with things like no direct link from download to how to get
> started/build etc), I don't see anything blocking.
>
> Hen
>
>
> On Fri, Aug 25, 2017 at 1:04 PM, Meghna Baijal  >
> wrote:
>
> > Hi all
> >
> > This is a call for a releasing Apache MXNet (incubating) 0.11.0, release
> > candidate 3.
> >
> > Apache MXNet community has voted and approved the release.
> >
> > Vote thread:
> > https://lists.apache.org/thread.html/2695a598ae0622484d4c886dc5b2ea
> > 823c306ca4ebef66accec6ee76@%3Cdev.mxnet.apache.org%3E <
> > https://lists.apache.org/thread.html/2695a598ae0622484d4c886dc5b2ea
> > 823c306ca4ebef66accec6ee76@%3Cdev.mxnet.apache.org%3E>
> >
> >
> > Result thread:
> > https://lists.apache.org/thread.html/d860c49194ec71c5c83ac0fa68df13
> > 050dbfada4ff7052be3401fc1b@%3Cdev.mxnet.apache.org%3E <
> > https://lists.apache.org/thread.html/d860c49194ec71c5c83ac0fa68df13
> > 050dbfada4ff7052be3401fc1b@%3Cdev.mxnet.apache.org%3E>
> >
> >
> > The source tarball, including signatures, digests, etc. can be found at:
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/0.11.0.rc3/ <
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/0.11.0.rc3/>
> >
> >
> > The release tag can be found here:
> > https://github.com/apache/incubator-mxnet/tree/0.11.0.rc3 <
> > https://github.com/apache/incubator-mxnet/tree/0.11.0.rc3>
> >
> >
> > The release hash is ba6413d29769075dd883ec5fe6eb24afc98fb3fd and can be
> > found here:
> > https://github.com/apache/incubator-mxnet/commit/
> > ba6413d29769075dd883ec5fe6eb24afc98fb3fd <https://github.com/apache/
> > incubator-mxnet/commit/ba6413d29769075dd883ec5fe6eb24afc98fb3fd>
> >
> >
> > Release artifacts are signed with the following key:
> > AA3EBCC3E65A768AE3D2A64B8EF47B8720E8C549
> >
> >
> > KEY files are available here:
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/0.11.0.rc3/ <
> > https://dist.apache.org/repos/dist/dev/incubator/mxnet/0.11.0.rc3/>
> >
> >
> > For information about the contents of this release, see:
> > https://cwiki.apache.org/confluence/display/MXNET/v0.
> > 11.0+Release+Notes+-+MXNet+v0.11+Release+Candidate <
> > https://cwiki.apache.org/confluence/display/MXNET/v0.
> > 11.0+Release+Notes+-+MXNet+v0.11+Release+Candidate>
> >
> >
> > The vote will be open for at least 72 hours.
> >
> > [ ] +1 Release this package as 0.11.0
> > [ ] +0 no opinion
> > [ ] -1 Do not release this package because...
> >
> > Thanks.
> >
>



-- 


Dominic Divakaruni
206.475.9200 Cell


release launch

2017-08-30 Thread Dominic Divakaruni
We have an opportunity to use AWS and Apple (for the Core ML converter
part) to market this release via blogs and social media on Tuesday 9/6.
This will be a good way to promote the work of the Apache MXNet community.

The Apache release policy states "Please ensure that you wait at least 24
hours after uploading a new release before updating the project download
page and sending the announcement email(s)."

How do you all feel about waiting until Monday to upload the new release so
we can have a big splash on Tuesday 9/6?

Announcements could go out via:
announce@apache
dev@
AWS blogs and social media
Apple developer news feed

To use the release in the meantime, we can just use RC3, as there are no
changes.


open issues

2017-08-30 Thread Dominic Divakaruni
Fellow MXNet'ers, we have over 1,900 open issues on GitHub, the most of any
deep learning framework. I am eager to carve out some time to work on
reducing this backlog (to the extent of my technical ability), and I'd like
to make this a team effort so we have a meaningful impact. Any ideas? Would
you be open to an issue-clean-up-athon?


Re: [ANNOUNCE] Apache MXNet (incubating) 0.11.0 Release

2017-09-06 Thread Dominic Divakaruni
Congratulations on this major milestone! Pretty exciting to see this Apache
project make progress!!

On Wed, Sep 6, 2017 at 6:41 PM, Meghna Baijal 
wrote:

> The Apache MXNet community is happy to announce Apache MXNet version
> 0.11.0! We hit some major milestones with this release!
> This is our first official release as an incubating Apache project. The
> project has now fully migrated its codebase and website to Apache.
> This release includes code contributions from developers from Apple,
> Samsung, Microsoft, and many others.
> We have also crossed over 400 contributors on the project so far. The
> 0.11.0 release features the Apple Core ML model converter and support for
> Keras v1.2.2.
>
> A blog that explains an end to end use case of building an ios app using
> MXNet and Core ML:
> https://aws.amazon.com/blogs/ai/bring-machine-learning-to-
> ios-apps-using-apache-mxnet-and-apple-core-ml/ <
> https://aws.amazon.com/blogs/ai/bring-machine-learning-to-
> ios-apps-using-apache-mxnet-and-apple-core-ml/>
>
> The AWS blog that highlights the key features of the release:
> https://aws.amazon.com/blogs/ai/apple-core-ml-and-keras-
> support-now-available-for-apache-mxnet/ <https://aws.amazon.com/blogs/
> ai/apple-core-ml-and-keras-support-now-available-for-apache-mxnet/>
>
> A full list of the changes in this release can be found in the release
> notes:
> https://cwiki.apache.org/confluence/display/MXNET/
> MXNet+0.11.0+Release+Notes <https://cwiki.apache.org/
> confluence/display/MXNET/MXNet+0.11.0+Release+Notes>
>
> Link to Download: http://www.apache.org/dist/incubator/mxnet/ <
> http://www.apache.org/dist/incubator/mxnet/>
>
> To build this project, view this page and select “Build from Source”:
> http://mxnet.incubator.apache.org/get_started/install.html <
> http://mxnet.incubator.apache.org/get_started/install.html>
>
> The Docker Images can be found here:
> https://hub.docker.com/u/mxnet/ <https://hub.docker.com/u/mxnet/>
>
> The Pip package can be found here:
> https://pypi.python.org/pypi/mxnet <https://pypi.python.org/pypi/mxnet>
>
> The Release Tag is here:
> https://github.com/apache/incubator-mxnet/tree/0.11.0 <
> https://github.com/apache/incubator-mxnet/tree/0.11.0>
>
> MXNet Resources
>- Issues: https://github.com/apache/incubator-mxnet/issues <
> https://github.com/apache/incubator-mxnet/issues>
>- Wiki: https://cwiki.apache.org/confluence/display/MXNET <
> https://cwiki.apache.org/confluence/display/MXNET>
>- Mailing list(s): dev@mxnet.incubator.apache.org
>
> For more information on Apache MXNet, please see:
> https://mxnet.incubator.apache.org/ <https://mxnet.incubator.apache.org/>
>
> Thanks!
> Apache MXNet(incubating) Team
> ___
>
> DISCLAIMER:
> Apache MXNet (incubating) is an effort undergoing incubation at The
> Apache Software Foundation (ASF), sponsored by the name of Apache
> Incubator PMC. Incubation is required of all newly accepted
> projects until a further review indicates that the
> infrastructure, communications, and decision making process have
> stabilized in a manner consistent with other successful ASF
> projects. While incubation status is not necessarily a reflection
> of the completeness or stability of the code, it does indicate
> that the project has yet to be fully endorsed by the ASF.
>
>



-- 


Dominic Divakaruni
206.475.9200 Cell


Re: MXNet website redesign

2017-09-13 Thread Dominic Divakaruni
I think this:

> The pPMC would like to update the website.

is just to say that the project's mentors reached out to the folks who
helped with the infra migration with a reminder about the website updates
that are needed.

I don't believe that a decision was made out of sight, though I do fully
agree that this reminder and the follow-on website design should have been
discussed on dev@ first.


On Tue, Sep 12, 2017 at 11:48 PM, Isabel Drost-Fromm 
wrote:

> Hi,
>
> First of all, great to read there are people with interest and time to
> update the website.
>
>
> On September 12, 2017 at 19:00:21 CEST, Seb Kiureghian <
> sebou...@gmail.com> wrote:
> >The pPMC would like to update the website.
>
> This wording is worrying to me. It sounds like a decision was made out of
> sight of the public project community, that is out of sight of dev@
>
> The only way to grow your community and make it possible for outsiders is
> to have  conversations leading up to decisions like these here on dev@.
> Maybe I missed those, happy for any pointers.
>
>
> Isabel
>
>
>
>
> --
> This message was sent from my Android device with K-9 Mail.
>



-- 


Dominic Divakaruni
206.475.9200 Cell


Re: What's everyone working on?

2017-09-25 Thread Dominic Divakaruni
Cool!! Sounds awesome.

Separately, Nvidia is working on CUDA 9 and NVCC support in MXNet for the
V100, which is going to launch sometime this fall. Hoping MXNet will be the
first framework out there with a version that supports Volta to its full
extent!



On Mon, Sep 25, 2017 at 1:20 PM, Tianqi Chen 
wrote:

> I am primarily working on deep learning compilation
> https://github.com/dmlc/tvm and hopefully you will hear the related
> updates
> in MXNet soon :)
>
> Tianqi
>
> On Mon, Sep 25, 2017 at 12:45 PM, Joern Kottmann 
> wrote:
>
> > Hello all,
> >
> > I am working on the Java API and frequently update my jvm-package branch
> > here:
> > https://github.com/kottmann/mxnet/commits/jvm-package
> >
> > Currently I focus on NDArray and Symbol/Executor, my short term goal
> > is to get the MNIST sample running.
> >
> > Anyone interested to help out?
> >
> > There are many more APIs that have to be implemented and we need to
> > find some way to do testing effectively in non-python APIs.
> >
> > Jörn
> >
> >
> > On Mon, Sep 25, 2017 at 8:23 PM, Seb Kiureghian 
> > wrote:
> > > Hey dev@,
> > >
> > > In the spirit of bringing more activity to the mailing lists and
> growing
> > > the community, can everyone who is working on MXNet please share what
> > > you're working on?
> > >
> > > I'm working on
> > > -Redesigning the website
> > > <https://mxnet.incubator.apache.org/versions/master/index.html>.
> > > -Setting up a forum for user support.
> > >
> > > Seb Kiureghian
> >
>



-- 


Dominic Divakaruni
206.475.9200 Cell


Re: What's everyone working on?

2017-09-26 Thread Dominic Divakaruni
That's great, YiZhi. Workday uses the Scala package and was looking for a
Maven distribution of v0.11. When do you think you'll have one up?

On Tue, Sep 26, 2017 at 8:58 AM, YiZhi Liu  wrote:

> I'm currently working on maven deploy for scala package.
>
> 2017-09-26 16:00 GMT+08:00 Zihao Zheng :
> > I’m working on standalone TensorBoard,
> > https://github.com/dmlc/tensorboard. Currently we support several
> > features from the original TensorBoard in TensorFlow, in pure Python
> > without any DL framework dependency.
> >
> > Recently I’ve been trying to bring more features to this standalone
> > version, but it seems not very trivial as it depends on TensorFlow. Any
> > advice is welcome, and I am looking for help.
> >
> > Thanks,
> > Zihao
> >
> >> On September 26, 2017, at 1:58 PM, sandeep krishnamurthy wrote:
> >>
> >> I am currently working with Jiajie Chen (https://github.com/jiajiechen/)
> on
> >> building an automated periodic benchmarking framework to run various
> >> standard MXNet training jobs with both Symbolic and Gluon interface.
> This
> >> framework will run following standard training jobs on a nightly and
> weekly
> >> basis helping us to track performance improvements or regression early
> in
> >> the development cycle of MXNet. Both CPU and GPU instances are used
> >> capturing various metrics like training accuracy, validation accuracy,
> >> convergence, memory consumption, speed.
> >>
> >> To start with, we will be running Resnet50, Resnet152 on CIFAR and
> >> Synthetic Dataset. And, few more RNN and Bidirectional LSTM training
> jobs.
> >>
> >> Thanks,
> >> Sandeep
> >>
> >>
> >> On Mon, Sep 25, 2017 at 8:00 PM, Henri Yandell 
> wrote:
> >>
> >>> Getting an instance of github.com/amzn/oss-dashboard setup for mxnet.
> >>>
> >>> Hopefully useful to write custom metric analysis; like: "most pull
> requests
> >>> from non-committer" and "PRs without committer comment".
> >>>
> >>> Hen
> >>>
> >>> On Mon, Sep 25, 2017 at 11:24 Seb Kiureghian 
> wrote:
> >>>
> >>>> Hey dev@,
> >>>>
> >>>> In the spirit of bringing more activity to the mailing lists and
> growing
> >>>> the community, can everyone who is working on MXNet please share what
> >>>> you're working on?
> >>>>
> >>>> I'm working on
> >>>> -Redesigning the website
> >>>> <https://mxnet.incubator.apache.org/versions/master/index.html>.
> >>>> -Setting up a forum for user support.
> >>>>
> >>>> Seb Kiureghian
> >>>>
> >>>
> >>
> >>
> >>
> >> --
> >> Sandeep Krishnamurthy
> >
>
>
>
> --
> Yizhi Liu
> DMLC member
> Technical Manager
> Qihoo 360 Inc, Shanghai, China
>



-- 


Dominic Divakaruni
206.475.9200 Cell


Re: What's everyone working on?

2017-10-02 Thread Dominic Divakaruni
👏

On Mon, Oct 2, 2017 at 8:02 PM Seb Kiureghian  wrote:

> It would be awesome if MXNet were the first DL framework to support Nvidia
> Volta. What do you all think about cutting a v0.12 release once that
> integration is ready?
>
> On Wed, Sep 27, 2017 at 10:38 PM, Jun Wu  wrote:
>
> > I had been working on the sparse tensor project with Haibin. After it was
> > wrapped up for the first stage, I started my work on the quantization
> > project (INT-8 inference). The benefits of using quantized models for
> > inference include much higher inference throughput than FP32 model with
> > acceptable accuracy loss and compact models saved on small devices. The
> > work currently aims at quantizing ConvNets, and we will consider
> expanding
> > it to RNN networks after getting good results for images. Meanwhile, it's
> > expected to support quantization on CPU, GPU, and mobile devices.
> >
>
-- 


Dominic Divakaruni
206.475.9200 Cell


Re: What's everyone working on?

2017-10-02 Thread Dominic Divakaruni
Seb is talking about support for CUDA 9 and cuDNN 7; pull requests below.
@ptrendx and Dick Carter are working through some performance issues but
should be done in a week (hopefully).

Jun, Bhavin,
The TensorRT runtime is a different subject. Nvidia is helping build a
converter for MXNet models; not sure on the ETA. TensorRT helps accelerate
vision models on the V100, TX2, P4/P40, etc.


   - Enabling persistent batch norm with cuDNN 7:
   https://github.com/apache/incubator-mxnet/pull/7876
   - Making mixed precision work with all optimizers:
   https://github.com/apache/incubator-mxnet/pull/7654
   - Faster IO pipeline needed for Volta:
   https://github.com/apache/incubator-mxnet/pull/7152
   - Expose Tell in RecordIO reader:
   https://github.com/dmlc/dmlc-core/pull/301


On Mon, Oct 2, 2017 at 8:44 PM, Bhavin Thaker 
wrote:

> Hi Seb: please use a different email thread for new topics of discussion.
>
> Hi Jun: I think Seb may be referring to Volta V100 support in MXNet and NOT
> P4/P40 inference accelerators.
>
> Corrections/clarifications welcome.
>
> Bhavin Thaker.
>
> On Mon, Oct 2, 2017 at 8:22 PM Jun Wu  wrote:
>
> > Thanks for your attention, Seb. We are inclined to be cautious about
> > what we can claim for this project. TensorRT has already supported
> > converting
> > TensorFlow and Caffe models to its compatible format for fast inference,
> > but not MXNet. In this sense, it may not be fair to claim MXNet as the
> > first one supporting Nvidia Volta.
> >
> > What we are working on is more experimental and research oriented. We
> want
> > to get the first-hand materials in our own hands by building a INT-8
> > inference prototype and have a thorough understanding on its strength and
> > limitation, rather than handing it off completely to TensorRT, which is
> > transparent to us. Considering that the project is experimental, it's
> still
> > too early to make a conclusion here as there are plenty of known/unknown
> > issues and unfinished work.
> >
> > On the other hand, we are glad to hear that Nvidia is working on
> supporting
> > model conversion from MXNet to TensorRT (Dom please correct me if I'm
> > mistaken). It would be super beneficial to MXNet on INT-8 if they could
> > open-source their work as we would be able to maintain and add new
> features
> > on our side.
> >
> >
> > On Mon, Oct 2, 2017 at 8:04 PM, Dominic Divakaruni <
> > dominic.divakar...@gmail.com> wrote:
> >
> > > 👏
> > >
> > > On Mon, Oct 2, 2017 at 8:02 PM Seb Kiureghian 
> > wrote:
> > >
> > > > It would be awesome if MXNet were the first DL framework to support
> > > Nvidia
> > > > Volta. What do you all think about cutting a v0.12 release once that
> > > > integration is ready?
> > > >
> > > > On Wed, Sep 27, 2017 at 10:38 PM, Jun Wu 
> wrote:
> > > >
> > > > > I had been working on the sparse tensor project with Haibin. After
> it
> > > was
> > > > > wrapped up for the first stage, I started my work on the
> quantization
> > > > > project (INT-8 inference). The benefits of using quantized models
> for
> > > > > inference include much higher inference throughput than FP32 model
> > with
> > > > > acceptable accuracy loss and compact models saved on small devices.
> > The
> > > > > work currently aims at quantizing ConvNets, and we will consider
> > > > expanding
> > > > > it to RNN networks after getting good results for images.
> Meanwhile,
> > > it's
> > > > > expected to support quantization on CPU, GPU, and mobile devices.
> > > > >
> > > >
> > > --
> > >
> > >
> > > Dominic Divakaruni
> > > 206.475.9200 Cell
> > >
> >
>



-- 


Dominic Divakaruni
206.475.9200 Cell


Re: What's everyone working on?

2017-10-02 Thread Dominic Divakaruni
On a separate but equally exciting note, how about we start talking about a
future 1.0 release and what everyone would want it to look like? I'll start
a separate thread. :)

On Mon, Oct 2, 2017 at 9:07 PM, Dominic Divakaruni <
dominic.divakar...@gmail.com> wrote:

> Seb is talking about support for Cuda 9 and cuDNN 7. Pull requests below.
> @ptrendx and Dick Carter are working through some performance issues but
> should be done in a week (hopefully).
>
> Jun, Bhavin,
> Tensor RT runtime is a different subject. Nvidia is helping build a
> converter for MXNet models. Not sure on the ETA. Tensor RT helps accelerate
> vision models on the V100, TX2, P4/40 etc...
>
>
>    - Enabling persistent batch norm with cuDNN 7:
>      https://github.com/apache/incubator-mxnet/pull/7876
>    - Making mixed precision work with all optimizers:
>      https://github.com/apache/incubator-mxnet/pull/7654
>    - Faster IO pipeline needed for Volta:
>      https://github.com/apache/incubator-mxnet/pull/7152
>    - Expose Tell in RecordIO reader:
>      https://github.com/dmlc/dmlc-core/pull/301
>
>
> On Mon, Oct 2, 2017 at 8:44 PM, Bhavin Thaker 
> wrote:
>
>> Hi Seb: please use a different email thread for new topics of discussion.
>>
>> Hi Jun: I think Seb may be referring to Volta V100 support in MXNet and
>> NOT
>> P4/P40 inference accelerators.
>>
>> Corrections/clarifications welcome.
>>
>> Bhavin Thaker.
>>
>> On Mon, Oct 2, 2017 at 8:22 PM Jun Wu  wrote:
>>
>> > Thanks for your attention, Seb. We are inclined to be cautious on what
>> can
>> > claim for this project. TensorRT has already supported converting
>> > TensorFlow and Caffe models to its compatible format for fast inference,
>> > but not MXNet. In this sense, it may not be fair to claim MXNet as the
>> > first one supporting Nvidia Volta.
>> >
>> > What we are working on is more experimental and research oriented. We
>> want
>> > to get the first-hand materials in our own hands by building a INT-8
>> > inference prototype and have a thorough understanding on its strength
>> and
>> > limitation, rather than handing it off completely to TensorRT, which is
>> > transparent to us. Considering that the project is experimental, it's
>> still
>> > too early to make a conclusion here as there are plenty of known/unknown
>> > issues and unfinished work.
>> >
>> > On the other hand, we are glad to hear that Nvidia is working on
>> supporting
>> > model conversion from MXNet to TensorRT (Dom please correct me if I'm
>> > mistaken). It would be super beneficial to MXNet on INT-8 if they could
>> > open-source their work as we would be able to maintain and add new
>> features
>> > on our side.
>> >
>> >
>> > On Mon, Oct 2, 2017 at 8:04 PM, Dominic Divakaruni <
>> > dominic.divakar...@gmail.com> wrote:
>> >
>> > > 👏
>> > >
>> > > On Mon, Oct 2, 2017 at 8:02 PM Seb Kiureghian 
>> > wrote:
>> > >
>> > > > It would be awesome if MXNet were the first DL framework to support
>> > > Nvidia
>> > > > Volta. What do you all think about cutting a v0.12 release once that
>> > > > integration is ready?
>> > > >
>> > > > On Wed, Sep 27, 2017 at 10:38 PM, Jun Wu 
>> wrote:
>> > > >
>> > > > > I had been working on the sparse tensor project with Haibin.
>> After it
>> > > was
>> > > > > wrapped up for the first stage, I started my work on the
>> quantization
>> > > > > project (INT-8 inference). The benefits of using quantized models
>> for
>> > > > > inference include much higher inference throughput than FP32 model
>> > with
>> > > > > acceptable accuracy loss and compact models saved on small
>> devices.
>> > The
>> > > > > work currently aims at quantizing ConvNets, and we will consider
>> > > > expanding
>> > > > > it to RNN networks after getting good results for images.
>> Meanwhile,
>> > > it's
>> > > > > expected to support quantization on CPU, GPU, and mobile devices.
>> > > > >
>> > > >
>> > > --
>> > >
>> > >
>> > > Dominic Divakaruni
>> > > 206.475.9200 Cell
>> > >
>> >
>>
>
>
>
> --
>
>
> Dominic Divakaruni
> 206.475.9200 Cell
>



-- 


Dominic Divakaruni
206.475.9200 Cell


pymc3

2017-10-02 Thread Dominic Divakaruni
Anyone interested in helping out with an MXNet backend for PyMC3 now that
Theano is dead?

https://twitter.com/twiecki/status/914594840456900608

https://github.com/pymc-devs/pymc4_prototypes


tutorial for sentiment analysis

2017-10-03 Thread Dominic Divakaruni
Check out this awesome new MXNet tutorial from O'Reilly on sentiment
analysis:

https://www.oreilly.com/ideas/sentiment-analysis-with-apache-mxnet


Re: Request for suggestions- Supporting onnx in mxnet

2017-10-18 Thread Dominic Divakaruni
Very happy you are doing this, Roshani!

On Wed, Oct 18, 2017 at 1:41 PM, Roshani Nagmote 
wrote:

> Hi guys,
>
>
> I am working on supporting ONNX <https://github.com/onnx/onnx> pre-trained
> models in Apache MXNet and would like to seek your opinion on the choice of
> implementation. I also have created a GitHub issue
> <https://github.com/apache/incubator-mxnet/issues/8319>. Supporting ONNX
> in
> MXNet will enable users to move between frameworks with their models, this
> will also enable MXNet project to be a part of the ONNX open standard and
> steer the direction of ONNX.
>
>
> For those who don’t know ONNX, ONNX is an open source format for AI models
> which enables models to be transferred between frameworks. Refer to
> https://github.com/onnx/onnx for more details.
>
>
> To implement the import/export functionality in MXNet, I propose to expose
> an MXNet Python module “serde” (name taken from the Apache Hive project)
> with the following methods supporting different formats:
>
> sym, params = mxnet.serde.import(other_format_file, other_format=‘onnx’)
>
> other_format_file =  mxnet.serde.export(mxnet_sym, mxnet_params, ‘onnx’)
>
>
> The implementation under the hood can be done in two ways:
>
>
> 1) Implement at the MXNet layer by parsing the ONNX model(in protobuf
> format) and turn into MXNet Symbolic operators and build MXNet model
> directly. Similarly, I can convert the MXNet model to ONNX format at this
> layer.
>
>
> 2) The DMLC community has released the nnvm/tvm compiler and an
> intermediate representation of the models; refer to:
> http://www.tvmlang.org/2017/10/06/nnvm-compiler-announcement.html
>
> Based on the conversation on the GitHub issue
> <https://github.com/apache/incubator-mxnet/issues/8319> I opened, Mu
> mentioned that MXNet would use nnvm/tvm as the backend in the future.
>
>
> We could hook into this layer to implement the import/export functionality.
> nnvm/tvm has ONNX 0.1 version import implemented.
>
> For import:
>
>    1. I will need to enhance nnvm/tvm’s importer to support ONNX 0.2.
>    2. Implement nnvm/tvm->mxnet symbolic operators.
>
> For export:
>
>    1. mxnet->nnvm/tvm (nnvm/tvm provides this implementation already).
>    2. I will need to implement nnvm/tvm->onnx.
>
> These are the pros and cons I see in the above approaches:
>
> 1. Import/export at the mxnet layer
>
> Pros:
>
>    1. Stable APIs currently used by users.
>    2. Larger Apache MXNet community of contributors.
>    3. CI pipeline to catch bugs.
>    4. Comparatively less time to implement and put it in the hands of the
>    users.
>
> Cons:
>
>    1. In the future we may have to reimplement at the nnvm/tvm layer, in
>    case MXNet moves to the nnvm/tvm backend (assuming it will move).
>
> 2. Import/export at the nnvm/tvm layer
>
> Pros:
>
>    1. Less engineering work in case mxnet moves to nnvm/tvm.
>    2. nnvm/tvm would become a hub to convert to different formats.
>    3. nnvm operators are more in parity with mxnet’s Gluon APIs; this
>    could be useful in case Gluon becomes the only standard that MXNet
>    will support.
>
> Cons:
>
>    1. Nascent project with few contributors.
>    2. Does not support all operators that exist in the MXNet Symbolic API.
>    3. No CI pipeline.
>    4. The current Apache MXNet project does not use the nnvm/tvm backend.
>    5. The mxnet->nnvm/tvm backend needs more testing and user feedback.
>
>
> Any suggestions on both of these approaches? From the user’s perspective,
> this will be an implementation detail that is not exposed.
>
> Thanks,
>
> Roshani
>



-- 


Dominic Divakaruni
206.475.9200 Cell