pymc3

2017-10-02 Thread Dominic Divakaruni
Anyone interested in helping out with an MXNet backend for PyMC3 now that
Theano is dead?

https://twitter.com/twiecki/status/914594840456900608

https://github.com/pymc-devs/pymc4_prototypes
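
The core capability a PyMC backend needs from a tensor library is automatic
differentiation of log-densities. A minimal sketch of that capability in
MXNet's autograd API (the toy log-density below is illustrative, not taken
from the pymc4 prototypes):

    import mxnet as mx

    x = mx.nd.array([0.5])
    x.attach_grad()                # allocate space for the gradient
    with mx.autograd.record():     # trace operations for backprop
        # log-density of a standard normal, up to an additive constant
        logp = -0.5 * x * x
    logp.backward()                # computes d(logp)/dx = -x
    print(x.grad.asnumpy())        # -> [-0.5]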


Re: What's everyone working on?

2017-10-02 Thread Dominic Divakaruni
Seb is talking about support for CUDA 9 and cuDNN 7; the relevant pull
requests are below. @ptrendx and Dick Carter are working through some
performance issues but should be done in a week (hopefully).

Jun, Bhavin,
The TensorRT runtime is a different subject. Nvidia is helping build a
converter for MXNet models; I'm not sure of the ETA. TensorRT helps
accelerate vision models on the V100, TX2, P4/P40, etc.


   - Enabling persistent batch norm with cuDNN 7:
   https://github.com/apache/incubator-mxnet/pull/7876
   - Making mixed precision work with all optimizers:
   https://github.com/apache/incubator-mxnet/pull/7654
   - Faster IO pipeline needed for Volta:
   https://github.com/apache/incubator-mxnet/pull/7152
   - Expose Tell in RecordIO reader:
   https://github.com/dmlc/dmlc-core/pull/301
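
To make the mixed-precision item concrete, here is a minimal sketch of the
usage that PR enables, assuming the multi_precision optimizer flag it
introduces; the toy weight/gradient pair is illustrative:

    import mxnet as mx

    # multi_precision keeps an FP32 master copy of each weight and applies
    # updates there, which keeps FP16 training numerically stable.
    opt = mx.optimizer.SGD(learning_rate=0.01, momentum=0.9,
                           multi_precision=True)
    updater = mx.optimizer.get_updater(opt)

    weight = mx.nd.ones((10,), dtype='float16')
    grad = mx.nd.full((10,), 0.5, dtype='float16')
    updater(0, grad, weight)   # FP16 update routed through the FP32 master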


On Mon, Oct 2, 2017 at 8:44 PM, Bhavin Thaker wrote:

> Hi Seb: please use a different email thread for new topics of discussion.
>
> Hi Jun: I think Seb may be referring to Volta V100 support in MXNet and NOT
> P4/P40 inference accelerators.
>
> Corrections/clarifications welcome.
>
> Bhavin Thaker.



-- 


Dominic Divakaruni
206.475.9200 Cell


Cutting a v0.12 release for Volta V100 support in MXNet

2017-10-02 Thread Seb Kiureghian
We are close to making MXNet the first DL framework to support training and
inference on the new and super-fast Nvidia Volta V100. It would be great to
get MXNet in front of developers who are at the cutting edge of Deep
Learning.

What do you all think of a v0.12 release sometime this month to add V100
support to MXNet?

Seb


Re: What's everyone working on?

2017-10-02 Thread Bhavin Thaker
Hi Seb: please use a different email thread for new topics of discussion.

Hi Jun: I think Seb may be referring to Volta V100 support in MXNet and NOT
P4/P40 inference accelerators.

Corrections/clarifications welcome.

Bhavin Thaker.

On Mon, Oct 2, 2017 at 8:22 PM Jun Wu  wrote:

> Thanks for your attention, Seb. We are inclined to be cautious about
> what we can claim for this project. TensorRT already supports
> converting TensorFlow and Caffe models to its own format for fast
> inference, but not MXNet models. In that sense, it may not be fair to
> claim MXNet as the first framework to support Nvidia Volta.
>
> What we are working on is more experimental and research-oriented. We
> want to get first-hand knowledge by building an INT-8 inference
> prototype ourselves and developing a thorough understanding of its
> strengths and limitations, rather than handing the problem off
> entirely to TensorRT, which is opaque to us. Considering that the
> project is experimental, it's still too early to draw conclusions, as
> there are plenty of known and unknown issues and unfinished work.
>
> On the other hand, we are glad to hear that Nvidia is working on
> supporting model conversion from MXNet to TensorRT (Dom, please
> correct me if I'm mistaken). It would be hugely beneficial to MXNet's
> INT-8 effort if they open-sourced that work, as we would then be able
> to maintain it and add new features on our side.


Re: What's everyone working on?

2017-10-02 Thread Chris Olivier
+1




Re: What's everyone working on?

2017-10-02 Thread Dominic Divakaruni


On Mon, Oct 2, 2017 at 8:02 PM Seb Kiureghian  wrote:

> It would be awesome if MXNet were the first DL framework to support Nvidia
> Volta. What do you all think about cutting a v0.12 release once that
> integration is ready?
-- 


Dominic Divakaruni
206.475.9200 Cell


Re: What's everyone working on?

2017-10-02 Thread Seb Kiureghian
It would be awesome if MXNet were the first DL framework to support Nvidia
Volta. What do you all think about cutting a v0.12 release once that
integration is ready?

On Wed, Sep 27, 2017 at 10:38 PM, Jun Wu  wrote:

> I had been working on the sparse tensor project with Haibin. After its
> first stage wrapped up, I started my work on the quantization project
> (INT-8 inference). The benefits of using quantized models for
> inference include much higher throughput than FP32 models, with
> acceptable accuracy loss, and more compact models to store on small
> devices. The work currently aims at quantizing ConvNets, and we will
> consider expanding it to RNNs after getting good results on image
> models. Meanwhile, it is expected to support quantization on CPU, GPU,
> and mobile devices.
>
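
As a rough illustration of what INT-8 inference involves, here is a generic
symmetric linear quantization sketch in NumPy; the helper names are
hypothetical, not the actual MXNet implementation:

    import numpy as np

    def quantize_int8(x):
        # Symmetric per-tensor quantization: map [-max|x|, max|x|]
        # onto [-127, 127] with a single FP32 scale factor.
        scale = 127.0 / np.abs(x).max()
        q = np.clip(np.round(x * scale), -127, 127).astype(np.int8)
        return q, scale

    def dequantize(q, scale):
        return q.astype(np.float32) / scale

    w = np.random.randn(64, 64).astype(np.float32)
    q, scale = quantize_int8(w)
    # The round trip loses a little precision -- the "acceptable accuracy
    # loss" traded for INT-8 throughput and smaller models.
    print(np.abs(w - dequantize(q, scale)).max())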


Re: MXNet Slack Channel

2017-10-02 Thread Kenta Iwasaki
Hi Seb,

Might you invite me as well?

Many thanks,
Kenta Iwasaki

On Tue, Oct 3, 2017 at 6:07 AM, Seb Kiureghian  wrote:

> invited
>


Re: PR builds are currently failing due to a known issue

2017-10-02 Thread Meghna Baijal
Hi Jason,
I did go through some of Beam’s source code but did not find any way to
overcome my current problem. Could you please point me in the right
direction?

Thanks,
Meghna Baijal

On Thu, Sep 28, 2017 at 10:32 AM, Daniel Pono Takamori wrote:

> Unfortunately we won't be able to enable all the Groovy methods, for
> security reasons. Fortunately, the Beam team has found some
> workarounds for this, so I'm cc'ing them to connect you and help
> figure out how to get around this issue.
>
> Jason, if you could point the MXNet folks to your builds repo so they
> could take a look and ask questions, that would be great!
>
> On Thu, Sep 28, 2017 at 10:59 AM, Naveen Swamy  wrote:
> > Please revert this change until Apache Infra approves all the
> > required scripts. I don't think we should let the PR builds continue
> > to fail this long.
> >
> > On Wed, Sep 27, 2017 at 3:57 PM, Meghna Baijal <meghnabaijal2...@gmail.com> wrote:
> >
> >> Hi All,
> >> This is just to let everyone know that PR #8034 is currently
> >> breaking the Apache MXNet PR builds. The master branch is not
> >> affected by this.
> >>
> >> The PR makes changes to the Jenkinsfile, and some script approvals
> >> are required from the Apache Infra team. I have opened a JIRA
> >> ticket for this, https://issues.apache.org/jira/browse/INFRA-15176,
> >> and we are in the process of resolving it.
> >>
> >> I will update this thread once the issue is fixed.
> >>
> >> Thanks,
> >> Meghna Baijal
> >>
> >>
>


Re: MXNet Slack Channel

2017-10-02 Thread Seb Kiureghian
invited

On Mon, Oct 2, 2017 at 3:00 PM, Joshua Arnold wrote:

> I would like to join the MXNet slack channel.
>
> Thanks,
> Josh
>


Re: Apache MXNet build failures are mostly valid - verify before merge

2017-10-02 Thread Gautam
Thanks, all.

I've created a JIRA ticket to mark the master branch as protected:
https://issues.apache.org/jira/browse/INCUBATOR-205.
We also need to add all the committers as code owners, as discussed in
Slack; I've opened a PR for that:
https://github.com/apache/incubator-mxnet/pull/8128.
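
For anyone unfamiliar with the mechanics, a protected branch simply makes
GitHub refuse the merge until the required status checks pass. A minimal
sketch using the GitHub REST branch-protection endpoint; the token, the
Jenkins check context, and the review settings are placeholders, not our
actual configuration:

    import requests

    TOKEN = "<github-admin-token>"   # hypothetical placeholder
    URL = ("https://api.github.com/repos/apache/incubator-mxnet"
           "/branches/master/protection")
    payload = {
        # Block merging until these checks pass on the latest commit.
        "required_status_checks": {
            "strict": True,   # branch must be up to date before merging
            "contexts": ["continuous-integration/jenkins/pr-merge"],
        },
        "enforce_admins": True,                # admins cannot bypass
        "required_pull_request_reviews": None,
        "restrictions": None,                  # no push restrictions
    }
    resp = requests.put(URL, json=payload,
                        headers={"Authorization": "token " + TOKEN,
                                 "Accept": "application/vnd.github.v3+json"})
    resp.raise_for_status()

Note that the "strict" flag also covers Joern's point below: a stale PR
must be brought up to date with master before it can merge.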

Good point, Joern; I'll follow up on that.

Regards,
Gautam

On Fri, Sep 29, 2017 at 2:20 AM, Joern Kottmann  wrote:

> It also makes sense to block stale PRs from merging, because their
> test results are outdated and the build might fail after it gets
> merged.
>
> Jörn
>
> On Thu, Sep 28, 2017 at 9:14 PM, Zha, Sheng  wrote:
> > +1 on protected branch.
> >
> > Best regards,
> > -sz
> >
> > On 9/28/17, 11:48 AM, "Kumar, Gautam"  wrote:
> >
> > Hi Guys,
> >
> > Let's focus on the specific issue here.
> >
> > Marking the master branch protected means a PR can only be merged if
> > the checks have passed, and yes, it will run the complete build.
> >
> > We can't afford to degrade quality and keep debugging build failures
> > forever. If this slows down development, I will vote for quality
> > over speed. We can work on improving the infrastructure to improve
> > the overall speed. If you have any specific concerns about the
> > availability of Jenkins, please point them out.
> >
> > -Gautam
> >
> >
> > On 9/28/17, 11:38 AM, "Chris Olivier"  wrote:
> >
> > -1000 on that. :)
> >
> > On Thu, Sep 28, 2017 at 11:33 AM Naveen Swamy <
> mnnav...@gmail.com> wrote:
> >
> > > PR -> sanity test / Linux build & test -> reviewer/committer
> > > approves the change -> comment "Build Now" (or trigger on at least
> > > one approval from a committer other than the author) -> *full
> > > build* -> *build passes* -> enable merge.
> > >
> > > Let us take this particular topic to a separate thread or discuss
> > > offline if further clarification is needed.
> > >
> > > On Thu, Sep 28, 2017 at 11:24 AM, Chris Olivier <
> cjolivie...@gmail.com>
> > > wrote:
> > >
> > > > I understand the proposal. How would a build be triggered in
> > > > that case?
> > > >
> > > >
> > > > On Thu, Sep 28, 2017 at 10:54 AM Madan Jampani <
> madan.jamp...@gmail.com>
> > > > wrote:
> > > >
> > > > > Chris,
> > > > > I don't think Naveen is suggesting that a merge happen without
> > > > > full verification, i.e. all tests passing across all
> > > > > platforms. If a PR has some back and forth and results in
> > > > > multiple revisions (which is arguably more common than a
> > > > > random unit test failing), we simply delay full verification
> > > > > until the owner and reviewer have settled on a mutually
> > > > > acceptable state.
> > > > >
> > > > > On Thu, Sep 28, 2017 at 10:25 AM, Chris Olivier <
> cjolivie...@gmail.com
> > > >
> > > > > wrote:
> > > > >
> > > > > > -1 for running only partial tests. Most failing unit tests
> > > > > > that slip through fail only on certain platforms or
> > > > > > configurations. I personally prefer to be assured that the
> > > > > > build and tests are good before merging. Most PR merges
> > > > > > aren't in a huge hurry.
> > > > > >
> > > > > > On Thu, Sep 28, 2017 at 9:54 AM, Naveen Swamy <
> mnnav...@gmail.com>
> > > > > wrote:
> > > > > >
> > > > > > > +1 to making it protected. Here is what I am thinking for
> > > > > > > PR builds: on a PR, run sanity tests plus a build/test on
> > > > > > > one platform -> a committer reviews the code and issues
> > > > > > > "Build Now" -> a full build is run -> GitHub verifies that
> > > > > > > the full build checks succeeded before the PR can be
> > > > > > > merged.
> > > > > > >
> > > > > > > I agree with Madan that a PR should be approved by at
> > > > > > > least one other committer.
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On Thu, Sep 28, 2017 at 9:37 AM, Madan Jampani <
> > > > > madan.jamp...@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > > > +1
> > > > > > > >
> > > > > > > > At a minimum, I'd like to see the following two things
> > > > > > > > happen:
> > > > > > > > - The option to merge is disabled until all required
> > > > > > > > checks pass.
> > > > > > > > - Code is reviewed and given +1 by at least one other
> > > > > > > > committer (no self-review).
> > > > > > > >
> > > > > > > > On Wed, Sep 27, 2017 at 11:15 PM, Gautam <
> gautamn...@gmail.com>
> > > > > wrote:
> > > > > > > >
> > > > > > > > > Hi Chris,
> > > > 

Re: New Apache MXNet logo idea

2017-10-02 Thread Lupesko, Hagay
Like it!
It's cute, fun, and playful. For me it also evokes speed.

Hagay

On 9/28/17, 20:08, "Henri Yandell"  wrote:

Love :)

Lots of good connections here. Nice feather style/colour in the bunny's
silhouette, a nice "magic" overlay connecting to Clarke's third law (any
sufficiently deep learning is indistinguishable from magic), and a great
name connection to 'mx'. The Chinese zodiac rabbit may, if appropriate,
speak well to MXNet's popularity in China (looking to those from, or more
versed in, China's culture for help there).

I do note that the zodiac rabbit's lucky colours are Apache's colours in
the feather.

Anyway - love it :)

Hen

On Thu, Sep 28, 2017 at 12:52 Seb Kiureghian  wrote:

>
> https://user-images.githubusercontent.com/591887/30987393-ba66e610-a44b-11e7-8226-da1711dcbdc5.jpg
>
>
>
> On Thu, Sep 28, 2017 at 12:38 PM, Madan Jampani wrote:
>
> > Is there a picture of Max?
> >
> > On Thu, Sep 28, 2017 at 12:17 PM, Seb Kiureghian wrote:
> >
> >> Hi all,
> >>
> >> I have a new idea for a logo that I'd like to propose.
> >>
> >> The rabbit (I call him Max) is blazingly fast, like MXNet, but also
> >> friendly and approachable, like the Gluon interface.
> >>
> >> Do you all like it?
> >>
> >> Seb
> >>
> >
> >
>