Re: [Announcement] New Committer - Zach Kimberg

2019-05-09 Thread Hagay Lupesko
Congratulations Zach - well deserved!

On Thu, May 9, 2019, 13:26 Qing Lan  wrote:

> Hi All,
>
> Please join me in welcoming Zach Kimberg (https://github.com/zachgk) as a
> new committer.
>
> He has been solving important bugs in the MXNet JVM packages, covering
> usability improvements, build issues and a lot more. He also created the
> Jenkins-based publish pipeline that gives everyone in the community a
> standard way to build and test the statically linked package conveniently.
> Moreover, he solved a number of license problems we had in MXNet and brought
> several fixes that let us get the 1.4.0 release out on time.
>
> Thanks,
> Qing
>


Re: Fujitsu Breaks ImageNet Record using MXNet (under 75 sec)

2019-04-08 Thread Hagay Lupesko
Agreed!
I will mention this to my colleagues at Amazon who can help with that.

On Mon, Apr 8, 2019 at 1:32 PM Chaitanya Bapat  wrote:

> Yes. Moreover, we should be pushing it on our Twitter, Reddit, Medium, etc
> social channels.
>
> On Mon, 8 Apr 2019 at 15:55, Hagay Lupesko  wrote:
>
> > That's super cool Chai - thanks for sharing!
> > I also noticed that, and was seeing how we can reach out to the Fujitsu
> > guys so they can contribute back into MXNet...
> >
> > On Mon, Apr 8, 2019 at 10:14 AM Lin Yuan  wrote:
> >
> > > Chai,
> > >
> > > Thanks for sharing. This is awesome news!
> > >
> > > Lin
> > >
> > > On Mon, Apr 8, 2019 at 8:48 AM Chaitanya Bapat 
> > > wrote:
> > >
> > > > Greetings!
> > > >
> > > > Great start to a Monday morning, as I came across this news on Import
> > AI,
> > > > an AI newsletter.
> > > >
> > > > The newsletter talked about Apache MXNet, hence thought of sharing it
> > > with
> > > > our community. This seems to be a great achievement worth paying
> > > attention
> > > > to.
> > > >
> > > > *75 seconds: How long it takes to train a network against ImageNet:*
> > > > *...Fujitsu Research claims state-of-the-art ImageNet training
> > scheme...*
> > > > Researchers with Fujitsu Laboratories in Japan have further reduced
> the
> > > > time it takes to train large-scale, supervised learning AI models;
> > their
> > > > approach lets them train a residual network to around 75% accuracy on
> > the
> > > > ImageNet dataset after 74.7 seconds of training time. This is a big
> > leap
> > > > from where we were in 2017 (an hour), and is impressive relative to
> > > > late-2018 performance (around 4 minutes: see issue #121
> > > > <
> > > >
> > >
> >
> https://twitter.us13.list-manage.com/track/click?u=67bd06787e84d73db24fb0aa5=28edafc07a=0b77acb987
> > > > >
> > > > ).
> > > >
> > > > *How they did it: *The researchers trained their system across *2,048
> > > Tesla
> > > > V100 GPUs* via the Amazon-developed MXNet deep learning framework.
> They
> > > > used a large mini-batch size of 81,920, and also implemented
> layer-wise
> > > > adaptive scaling (LARS) and a 'warming up' period to increase
> learning
> > > > efficiency.
> > > >
> > > > *Why it matters:* Training large models on distributed infrastructure
> > is
> > > a
> > > > key component of modern AI research, and the reduction in time we've
> > seen
> > > > on ImageNet training is striking - I think this is emblematic of the
> > > > industrialization of AI, as people seek to create systematic
> approaches
> > > to
> > > > efficiently training models across large amounts of computers. This
> > trend
> > > > ultimately leads to a speedup in the rate of research reliant on
> > > > large-scale experimentation, and can unlock new paths of research.
> > > > *  Read more:* Yet Another Accelerated SGD: ResNet-50 Training on
> > > ImageNet
> > > > in 74.7 seconds (Arxiv)
> > > > <
> > > >
> > >
> >
> https://twitter.us13.list-manage.com/track/click?u=67bd06787e84d73db24fb0aa5=d2b13c879f=0b77acb987
> > > > >
> > > > .
> > > >
> > > > NVIDIA article -
> > > >
> > > >
> > >
> >
> https://news.developer.nvidia.com/fujitsu-breaks-imagenet-record-with-v100-tensor-core-gpus/
> > > >
> > > > Hope that gives further impetus to strive harder!
> > > > Have a good week!
> > > > Chai
> > > >
> > > >  --
> > > > *Chaitanya Prakash Bapat*
> > > > *+1 (973) 953-6299*
> > > >
> > > > GitHub: https://github.com/ChaiBapchya | Twitter: https://twitter.com/ChaiBapchya
> > > > Facebook: https://www.facebook.com/chaibapchya | LinkedIn: https://www.linkedin.com//in/chaibapchya/
> > > >
> > >
> >
>
>
> --
> *Chaitanya Prakash Bapat*
> *+1 (973) 953-6299*
>
> GitHub: https://github.com/ChaiBapchya | Twitter: https://twitter.com/ChaiBapchya
> Facebook: https://www.facebook.com/chaibapchya | LinkedIn: https://www.linkedin.com//in/chaibapchya/
>


Re: MXNet 1.4.1 Release Proposal

2019-04-08 Thread Hagay Lupesko
Awesome - thanks Junru and Sheng!
I have updated the CWiki to reflect you being the release manager and
shepherd.

Junru - I suggest we give the community a week more to add critical fix
proposals, before we set a timeline. Please feel free to drive this
forward, and I'm happy to help as needed.

Thanks everyone,
Hagay

On Thu, Apr 4, 2019 at 2:27 PM Sheng Zha  wrote:

> Thanks Hagay for proposing the release and for Junru to volunteer to drive
> the release. I will help Junru as the committer for this release.
>
> -sz
>
> On Thu, Apr 4, 2019 at 2:18 PM Junru Shao  wrote:
>
> > Hi Hagay,
> >
> > I have some experiences in MXNet development, and would love to volunteer
> > for driving this release.
> >
> > Thank you so much!
> >
> > Best,
> > Junru
> >
> > On Thu, Apr 4, 2019 at 1:51 PM Hagay Lupesko  wrote:
> >
> > > Hello MXNet community,
> > >
> > > As previously discussed in [0
> > > <
> > >
> >
> https://lists.apache.org/thread.html/a5f444999bf428d06e691b1856392ae5ebb24a3485eaa484a73de10d@%3Cdev.mxnet.apache.org%3E
> > > >],
> > > and per the feedback from Pedro, Kellen and Sheng, I'd like to propose
> > > releasing MXNet 1.4.1.
> > > MXNet 1.4.1 is a patch release on top of 1.4.0 (following semver[1
> > > <https://semver.org/>]), that includes backwards compatible bug fixes
> -
> > a
> > > couple I am aware of are mem leaks in Scala API, Gluon RNN and
> NDArrays.
> > >
> > > I went ahead and created a draft release page on CWiki [2
> > > <
> > >
> >
> https://cwiki.apache.org/confluence/display/MXNET/%5BDRAFT+PROPOSAL%5D+Apache+MXNet+%28incubating%29+1.4.1+Release+Plan+and+Status
> > > >],
> > > thanks to Yuxi Hu for adding a mem leak fix, and thanks to Andrew
> Ayres,
> > > Qing Lan and Sergey Sokolov for fixing bugs in 1.4.0 - I went ahead and
> > > added your fixes to the list.
> > >
> > > Asking the community to:
> > > (1) Any bug fix or regression you identified and fixed after 1.4.0
> > release?
> > > please add it to the release proposal wiki (or msg me on Slack if you
> > don't
> > > have write access, happy to do it).
> > > (2) Any comments or suggestions on the release wiki? please leave
> > comments
> > > on the wiki or reply to this email.
> > > (3) I am looking for volunteers to drive the release - ideally we'll
> have
> > > two volunteers: a non-committer and a shepherd committer that can also
> > help
> > > with the logistics that require permissions. This is a great way to
> > > contribute to the community and help MXNet!
> > >
> > > I plan to check-in in a few days and finalize the proposal, so timely
> > > response is appreciated.
> > >
> > > Cheers,
> > > Hagay
> > >
> > > [0]
> > >
> > >
> >
> https://lists.apache.org/thread.html/a5f444999bf428d06e691b1856392ae5ebb24a3485eaa484a73de10d@%3Cdev.mxnet.apache.org%3E
> > > [1] https://semver.org/
> > > [2]
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/MXNET/%5BDRAFT+PROPOSAL%5D+Apache+MXNet+%28incubating%29+1.4.1+Release+Plan+and+Status
> > >
> >
>


Re: Fujitsu Breaks ImageNet Record using MXNet (under 75 sec)

2019-04-08 Thread Hagay Lupesko
That's super cool Chai - thanks for sharing!
I also noticed that, and was seeing how we can reach out to the Fujitsu
guys so they can contribute back into MXNet...

On Mon, Apr 8, 2019 at 10:14 AM Lin Yuan  wrote:

> Chai,
>
> Thanks for sharing. This is awesome news!
>
> Lin
>
> On Mon, Apr 8, 2019 at 8:48 AM Chaitanya Bapat 
> wrote:
>
> > Greetings!
> >
> > Great start to a Monday morning, as I came across this news on Import AI,
> > an AI newsletter.
> >
> > The newsletter talked about Apache MXNet, hence thought of sharing it
> with
> > our community. This seems to be a great achievement worth paying
> attention
> > to.
> >
> > *75 seconds: How long it takes to train a network against ImageNet:*
> > *...Fujitsu Research claims state-of-the-art ImageNet training scheme...*
> > Researchers with Fujitsu Laboratories in Japan have further reduced the
> > time it takes to train large-scale, supervised learning AI models; their
> > approach lets them train a residual network to around 75% accuracy on the
> > ImageNet dataset after 74.7 seconds of training time. This is a big leap
> > from where we were in 2017 (an hour), and is impressive relative to
> > late-2018 performance (around 4 minutes: see issue #121
> > <
> >
> https://twitter.us13.list-manage.com/track/click?u=67bd06787e84d73db24fb0aa5=28edafc07a=0b77acb987
> > >
> > ).
> >
> > *How they did it: *The researchers trained their system across *2,048
> Tesla
> > V100 GPUs* via the Amazon-developed MXNet deep learning framework. They
> > used a large mini-batch size of 81,920, and also implemented layer-wise
> > adaptive scaling (LARS) and a 'warming up' period to increase learning
> > efficiency.
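
[Editor's note: below is a minimal, self-contained sketch of the two techniques mentioned above, a learning-rate warm-up plus LARS-style layer-wise scaling. It is written purely for illustration and is not Fujitsu's implementation; all function names and hyperparameters are assumptions.]

```python
import numpy as np

def warmup_lr(base_lr, step, warmup_steps):
    """Linearly ramp the learning rate from ~0 to base_lr during warm-up."""
    if step < warmup_steps:
        return base_lr * float(step + 1) / warmup_steps
    return base_lr

def lars_sgd_step(weight, grad, lr, eta=0.001, weight_decay=1e-4):
    """One SGD step with a LARS-style trust ratio: the per-layer update is
    scaled by ||w|| / ||g + wd*w||, which keeps very-large-batch training stable."""
    g = grad + weight_decay * weight
    w_norm, g_norm = np.linalg.norm(weight), np.linalg.norm(g)
    trust_ratio = eta * w_norm / g_norm if w_norm > 0 and g_norm > 0 else 1.0
    return weight - lr * trust_ratio * g

# Toy usage: a single "layer" updated for a few steps.
w = np.random.randn(256, 128) * 0.01
for step in range(10):
    grad = np.random.randn(*w.shape) * 0.1  # stand-in for a real gradient
    lr = warmup_lr(base_lr=0.1, step=step, warmup_steps=5)
    w = lars_sgd_step(w, grad, lr)
```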
> >
> > *Why it matters:* Training large models on distributed infrastructure is
> a
> > key component of modern AI research, and the reduction in time we've seen
> > on ImageNet training is striking - I think this is emblematic of the
> > industrialization of AI, as people seek to create systematic approaches
> to
> > efficiently training models across large amounts of computers. This trend
> > ultimately leads to a speedup in the rate of research reliant on
> > large-scale experimentation, and can unlock new paths of research.
> > *  Read more:* Yet Another Accelerated SGD: ResNet-50 Training on
> ImageNet
> > in 74.7 seconds (Arxiv)
> > <
> >
> https://twitter.us13.list-manage.com/track/click?u=67bd06787e84d73db24fb0aa5=d2b13c879f=0b77acb987
> > >
> > .
> >
> > NVIDIA article -
> >
> >
> https://news.developer.nvidia.com/fujitsu-breaks-imagenet-record-with-v100-tensor-core-gpus/
> >
> > Hope that gives further impetus to strive harder!
> > Have a good week!
> > Chai
> >
> >  --
> > *Chaitanya Prakash Bapat*
> > *+1 (973) 953-6299*
> >
> > Twitter: https://twitter.com/ChaiBapchya | Facebook: https://www.facebook.com/chaibapat
> > LinkedIn: https://www.linkedin.com//in/chaibapat25
> >
>


MXNet 1.4.1 Release Proposal

2019-04-04 Thread Hagay Lupesko
Hello MXNet community,

As previously discussed in [0
],
and per the feedback from Pedro, Kellen and Sheng, I'd like to propose
releasing MXNet 1.4.1.
MXNet 1.4.1 is a patch release on top of 1.4.0 (following semver[1
]), that includes backwards compatible bug fixes - a
couple I am aware of are mem leaks in Scala API, Gluon RNN and NDArrays.

I went ahead and created a draft release page on CWiki [2
],
thanks to Yuxi Hu for adding a mem leak fix, and thanks to Andrew Ayres,
Qing Lan and Sergey Sokolov for fixing bugs in 1.4.0 - I went ahead and
added your fixes to the list.

Asking the community to:
(1) Did you identify and fix any bug or regression after the 1.4.0 release?
Please add it to the release proposal wiki (or msg me on Slack if you don't
have write access, happy to do it).
(2) Any comments or suggestions on the release wiki? Please leave comments
on the wiki or reply to this email.
(3) I am looking for volunteers to drive the release - ideally we'll have
two volunteers: a non-committer and a shepherd committer that can also help
with the logistics that require permissions. This is a great way to
contribute to the community and help MXNet!

I plan to check back in a few days and finalize the proposal, so timely
response is appreciated.

Cheers,
Hagay

[0]
https://lists.apache.org/thread.html/a5f444999bf428d06e691b1856392ae5ebb24a3485eaa484a73de10d@%3Cdev.mxnet.apache.org%3E
[1] https://semver.org/
[2]
https://cwiki.apache.org/confluence/display/MXNET/%5BDRAFT+PROPOSAL%5D+Apache+MXNet+%28incubating%29+1.4.1+Release+Plan+and+Status


Re: Discussing plans for next MXNet releases

2019-04-04 Thread Hagay Lupesko
Thanks Kellen, Pedro and Sheng for the feedback.

Kellen -
- Thanks for proposing 1.5 features. Kindly note them on the issue Sheng
created: https://github.com/apache/incubator-mxnet/issues/14619
- For your 2.0 proposals - can you please update them in the issue Sheng
created: https://github.com/apache/incubator-mxnet/issues/9686
Pedro -
- Thank you for volunteering for 2.0 release!
- Thanks for referencing the issue that tracks 2.0 API updates; it looks like
Sheng updated it to track the broader 2.0 features.
Sheng -
- Agreed that each release should be managed separately, my intention was
to kick start thinking for MXNet short term and long term roadmap - we can
fork at this point.
- Thanks for creating the issues for 1.5 and 2.0 - the community can start
surfacing proposals there.
- Agreed that 1.4.1 should include fixes, not features. I'll start a
separate thread on that.

As discussed - we will have separate threads for each of the releases, and
I will start with 1.4.1

Cheers,
Hagay


On Tue, Apr 2, 2019 at 6:39 PM Sheng Zha  wrote:

> Hi Hagay,
>
> Thanks for taking the initiative. The proposed scope in this thread is in
> my opinion too large to fit in a single thread, so I'd suggest that we
> start separate threads for each individual release item. To elaborate on
> the reasons based on each individual item:
> - For 1.4.1 which is in the wiki page draft, I'd suggest refraining from
> adding new features there since patch release should be about bug fixes.
> - For 1.5, there are efforts such as AMP and general improvement for fp16
> support in operators, quantization efforts, etc., that should be included.
> I may have a bit more context on this so I'm happy to help initiate the
> discussion.
> - For 2.0, I think it would be more of a roadmap discussion at this stage.
>
> I hope this makes sense. Would you mind starting a thread focusing on 1.4.1
> patch release?
>
> -sz
>
>
> On Tue, Apr 2, 2019 at 5:06 PM Hagay Lupesko  wrote:
>
> > Dear MXNet community,
> >
> > I wanted to initiate a discussion about the plan and scope for the next
> > MXNet releases.
> > I suggest we focus on three releases, and get the process going in
> > parallel:
> > (1) 1.4.1 - patch release on top of 1.4.0 to address some perf
> regressions
> > and memory leaks I am aware of, such as the memory leak fixed on Scala [0
> > <https://github.com/apache/incubator-mxnet/pull/14586>]. I went ahead
> and
> > created a draft release proposal wiki [1
> > <
> >
> https://cwiki.apache.org/confluence/display/MXNET/%5BDRAFT+PROPOSAL%5D+Apache+MXNet+%28incubating%29+1.4.1+Release+Plan+and+Status
> > >
> > ].
> > (2) 1.5.0 - a minor release to add new features introduced since 1.4.0
> > release started (back in Nov 2018!), such as various performance
> > improvements: aggregate SGD, in-place updates in optimizers, gpu support
> > for image processing operators and many more features useful for MXNet’s
> > users.
> > (3) 2.0 - an exciting major release that will include major enhancements
> to
> > MXNet.
> >
> > Timeframes will probably vary based on the scope. I think we should plan
> to
> > start 1.4.1 release within a couple of weeks, 1.5.0 should target
> starting
> > once we release 1.4.1, and 2.0 timeline is TBD - but such a major release
> > will require more time to discuss and decide in the community.
> >
> > I was thinking to get started through:
> > (1) Draft proposals on CWiki, where the community can add content and
> > propose scope and features.
> > (2) Setup online meetings, where anyone can dial into, from anywhere,
> where
> > we will have a chance to discuss in voice+video.
> > (3) With (1)+(2) have a scope and timeline that the community, in large,
> > supports.
> >
> > Would be great to get the community's feedback and suggestions, and
> please
> > reply if you would like to be involved in the effort of supporting the
> > releases!
> >
> > MXNet is awesome, looking forward to working together to make it even
> > better!
> > Hagay
> >
> > [0] https://github.com/apache/incubator-mxnet/pull/14586
> > [1]
> >
> >
> https://cwiki.apache.org/confluence/display/MXNET/%5BDRAFT+PROPOSAL%5D+Apache+MXNet+%28incubating%29+1.4.1+Release+Plan+and+Status
> >
>


Re: Requesting slack access

2019-04-04 Thread Hagay Lupesko
Hi Xiuquan,

Slack invite sent - welcome to the MXNet community!
Please slack me @Hagay Lupesko - would love to chat about how you guys are
thinking about using MXNet.

Hagay

On Thu, Apr 4, 2019 at 1:24 AM Xiuquan Lv  wrote:

> Dear MXNet community,
>
>
>
>
> Please join me in the MXNet Slack Community.
>
>
>
>
> Thanks
>
> Xiuquan Lv


Re: Podling Report Reminder - April 2019

2019-04-03 Thread Hagay Lupesko
Sounds good Sheng, thanks!

On Tue, Apr 2, 2019, 17:26 Sheng Zha  wrote:

> Thanks for the reminder. I’m working on it and will post the draft back to
> the list, and would appreciate feedback from the community by then.
>
> -sz
>
> > On Apr 2, 2019, at 5:23 PM, Tianqi Chen 
> wrote:
> >
> > It would be great if the PPMC coordinate and prepare the report
> >
> >> On Tue, Apr 2, 2019 at 4:00 PM Hagay Lupesko  wrote:
> >>
> >> Is anyone working on the podling report?
> >> I'm happy to take care of that if no one else is planning to do it.
> >>
> >> Cheers,
> >> Hagay
> >>
> >>> On Fri, Mar 29, 2019 at 4:06 PM  wrote:
> >>>
> >>> Dear podling,
> >>>
> >>> This email was sent by an automated system on behalf of the Apache
> >>> Incubator PMC. It is an initial reminder to give you plenty of time to
> >>> prepare your quarterly board report.
> >>>
> >>> The board meeting is scheduled for Wed, 17 April 2019, 10:30 am PDT.
> >>> The report for your podling will form a part of the Incubator PMC
> >>> report. The Incubator PMC requires your report to be submitted 2 weeks
> >>> before the board meeting, to allow sufficient time for review and
> >>> submission (Wed, April 03).
> >>>
> >>> Please submit your report with sufficient time to allow the Incubator
> >>> PMC, and subsequently board members to review and digest. Again, the
> >>> very latest you should submit your report is 2 weeks prior to the board
> >>> meeting.
> >>>
> >>> Candidate names should not be made public before people are actually
> >>> elected, so please do not include the names of potential committers or
> >>> PPMC members in your report.
> >>>
> >>> Thanks,
> >>>
> >>> The Apache Incubator PMC
> >>>
> >>> Submitting your Report
> >>>
> >>> --
> >>>
> >>> Your report should contain the following:
> >>>
> >>> *   Your project name
> >>> *   A brief description of your project, which assumes no knowledge of
> >>>the project or necessarily of its field
> >>> *   A list of the three most important issues to address in the move
> >>>towards graduation.
> >>> *   Any issues that the Incubator PMC or ASF Board might wish/need to
> be
> >>>aware of
> >>> *   How has the community developed since the last report
> >>> *   How has the project developed since the last report.
> >>> *   How does the podling rate their own maturity.
> >>>
> >>> This should be appended to the Incubator Wiki page at:
> >>>
> >>> https://wiki.apache.org/incubator/April2019
> >>>
> >>> Note: This is manually populated. You may need to wait a little before
> >>> this page is created from a template.
> >>>
> >>> Mentors
> >>> ---
> >>>
> >>> Mentors should review reports for their project(s) and sign them off on
> >>> the Incubator wiki page. Signing off reports shows that you are
> >>> following the project - projects that are not signed may raise alarms
> >>> for the Incubator PMC.
> >>>
> >>> Incubator PMC
> >>>
> >>
>


Discussing plans for next MXNet releases

2019-04-02 Thread Hagay Lupesko
Dear MXNet community,

I wanted to initiate a discussion about the plan and scope for the next
MXNet releases.
I suggest we focus on three releases, and get the process going in parallel:
(1) 1.4.1 - patch release on top of 1.4.0 to address some perf regressions
and memory leaks I am aware of, such as the memory leak fixed on Scala [0
]. I went ahead and
created a draft release proposal wiki [1

].
(2) 1.5.0 - a minor release to add new features introduced since 1.4.0
release started (back in Nov 2018!), such as various performance
improvements: aggregate SGD, in-place updates in optimizers, gpu support
for image processing operators and many more features useful for MXNet’s
users.
(3) 2.0 - an exciting major release that will include major enhancements to
MXNet.

Timeframes will probably vary based on the scope. I think we should plan to
start 1.4.1 release within a couple of weeks, 1.5.0 should target starting
once we release 1.4.1, and 2.0 timeline is TBD - but such a major release
will require more time to discuss and decide in the community.

I was thinking to get started through:
(1) Draft proposals on CWiki, where the community can add content and
propose scope and features.
(2) Set up online meetings that anyone can dial into from anywhere, where
we will have a chance to discuss over voice+video.
(3) With (1)+(2), arrive at a scope and timeline that the community, at large,
supports.

Would be great to get the community's feedback and suggestions, and please
reply if you would like to be involved in the effort of supporting the
releases!

MXNet is awesome, looking forward to working together to make it even
better!
Hagay

[0] https://github.com/apache/incubator-mxnet/pull/14586
[1]
https://cwiki.apache.org/confluence/display/MXNET/%5BDRAFT+PROPOSAL%5D+Apache+MXNet+%28incubating%29+1.4.1+Release+Plan+and+Status


Re: Podling Report Reminder - April 2019

2019-04-02 Thread Hagay Lupesko
Is anyone working on the podling report?
I'm happy to take care of that if no one else is planning to do it.

Cheers,
Hagay

On Fri, Mar 29, 2019 at 4:06 PM  wrote:

> Dear podling,
>
> This email was sent by an automated system on behalf of the Apache
> Incubator PMC. It is an initial reminder to give you plenty of time to
> prepare your quarterly board report.
>
> The board meeting is scheduled for Wed, 17 April 2019, 10:30 am PDT.
> The report for your podling will form a part of the Incubator PMC
> report. The Incubator PMC requires your report to be submitted 2 weeks
> before the board meeting, to allow sufficient time for review and
> submission (Wed, April 03).
>
> Please submit your report with sufficient time to allow the Incubator
> PMC, and subsequently board members to review and digest. Again, the
> very latest you should submit your report is 2 weeks prior to the board
> meeting.
>
> Candidate names should not be made public before people are actually
> elected, so please do not include the names of potential committers or
> PPMC members in your report.
>
> Thanks,
>
> The Apache Incubator PMC
>
> Submitting your Report
>
> --
>
> Your report should contain the following:
>
> *   Your project name
> *   A brief description of your project, which assumes no knowledge of
> the project or necessarily of its field
> *   A list of the three most important issues to address in the move
> towards graduation.
> *   Any issues that the Incubator PMC or ASF Board might wish/need to be
> aware of
> *   How has the community developed since the last report
> *   How has the project developed since the last report.
> *   How does the podling rate their own maturity.
>
> This should be appended to the Incubator Wiki page at:
>
> https://wiki.apache.org/incubator/April2019
>
> Note: This is manually populated. You may need to wait a little before
> this page is created from a template.
>
> Mentors
> ---
>
> Mentors should review reports for their project(s) and sign them off on
> the Incubator wiki page. Signing off reports shows that you are
> following the project - projects that are not signed may raise alarms
> for the Incubator PMC.
>
> Incubator PMC
>


Re: [Announcement] New Committer - Patric Zhao

2019-03-17 Thread Hagay Lupesko
Congrats Patric!

On Fri, Mar 15, 2019 at 7:49 AM Joshua Z. Zhang 
wrote:

>
>
>
>  Congrats Patrick!
>
>
>
>
>
>  Zhi
>
> >
> > On Mar 15, 2019 at 10:46 PM, Marco de Abreu (marco.g.ab...@gmail.com) wrote:
> >
> >
> >
> >  Congratulations, great to have you on board!
> >
> > -Marco
> >
> > Lv, Tao A wrote on Fri., 15 Mar 2019, 15:38:
> >
> > >  Wow, congratulation Patric!
> > >
> > >  -Original Message-
> > >  From: Steffen Rochel [mailto:steffenroc...@gmail.com]
> > >  Sent: Friday, March 15, 2019 10:25 PM
> > >  To: dev@mxnet.incubator.apache.org
> > >  Cc: patric zhao  
> > >  Subject: Re: [Announcement] New Committer - Patric Zhao
> > >
> > >  Congratulation Patrick!
> > >  Steffen
> > >
> > >  On Fri, Mar 15, 2019 at 5:38 AM Zhao, Patric  
>
> > >  wrote:
> > >
> > >   >  I am very glad to have this opportunity to contribute to the
> > >   >  Apache/MXNet community :)
> > >   >
> > >   >  Thanks all of the supports from the community and Intel.
> > >   >
> > >   >  BR,
> > >   >
> > >   >  --Patric
> > >   >
> > >   >
> > >   >   >  -Original Message-
> > >   >   >  From: MiraiWK WKCN [mailto:w...@live.cn]
> > >   >   >  Sent: Friday, March 15, 2019 12:52 AM
> > >   >   >  To: dev@mxnet.incubator.apache.org; patric zhao
> > >   >   >   
> > >   >   >  Subject: Re: [Announcement] New Committer - Patric Zhao
> > >   >   >
> > >   >   >  Welcome Peng Zhao!
> > >   >   >  Peng is the AI Tech Leader at Intel Corporation. We have had good
> > >   >   >  cooperation before. He is very professional and contributes a lot
> > >   >   >  to MXNet, especially deep learning acceleration on CPU.
> > >   >   >
> > >   >   >  
> > >   >   >  From: Anirudh Subramanian  
> > >   >   >  Sent: Thursday, March 14, 2019 3:54:50 PM
> > >   >   >  To: dev@mxnet.incubator.apache.org; patric zhao
> > >   >   >  Subject: [Announcement] New Committer - Patric Zhao
> > >   >   >
> > >   >   >  Hi all,
> > >   >   >
> > >   >   >  Please join me to welcome Patric Zhao as a new committer of
> Apache
> > >   >   >  (incubating) MXNet!
> > >   >   >
> > >   >   >  Patric has put in great effort around MKLDNN integration into
> MXNet
> > >   >   >  and
> > >   >  has
> > >   >   >  been involved in features like quantization, graph fusion and
> fused
> > >   >   >  RNN operators for CPU.
> > >   >   >
> > >   >   >  Dev List activity:
> > >   >   >
> > >   >
> https://lists.apache.org/list.html?d...@mxnet.apache.org:lte=3y:patric.
> > >   >  zhao
> > >   >   >
> > >   >   >  Issues:
> > >   >   >  https://github.com/apache/incubator-
> > >   >   >
> mxnet/issues?utf8=%E2%9C%93=is%3Aissue+involves%3Apengzhao-intel+
> > >   >   >
> > >   >   >  PR Reviews:
> > >   >   >  https://github.com/apache/incubator-
> > >   >   >
> mxnet/pulls?utf8=%E2%9C%93=is%3Apr+reviewed-by%3Apengzhao-intel
> > >   >   >
> > >   >   >  Proposals involved in:
> > >   >   >
> https://cwiki.apache.org/confluence/display/MXNET/MXNet+Graph+Optimi
> > >   >   >  z
> > >   >   >  ation+and+Quantization+based+on+subgraph+and+MKL-DNN
> > >   >   >
> https://cwiki.apache.org/confluence/display/MXNET/Fused+RNN+Operator
> > >   >   >  s
> > >   >   >  +for+CPU
> > >   >   >   <
> https://cwiki.apache.org/confluence/display/MXNET/MXNet+Graph+Optim
> > >   >   >  i
> > >   >   >  zation+and+Quantization+based+on+subgraph+and+MKL-DNN>
> > >   >   >
> > >   >   >
> > >   >   >  Thanks,
> > >   >   >  Anirudh
> > >   >
> > >
> >


Re: Gluon fit API- Design proposal

2019-02-10 Thread Hagay Lupesko
Wanted to chime in as well.
I have reviewed the design shared in the mail offline with Ankit, Lai and
Naveen (we work in the same team in Amazon).

I think it does a good job at simplifying many low-complexity training use
cases, which can make MXNet/Gluon even more friendly to so-called "deep
learning beginners" - so +1 on the proposal!

Hagay

On Fri, Feb 8, 2019 at 10:30 AM Naveen Swamy  wrote:

> Hi Alfredo,
> Thanks for your comments, I really like all your suggestions. Here are my
> answers; let me know if they make sense or if you have comments.
>
> 1) The fit API is targeting novice users, covering about 80% of the use
> cases listed in the document. For advanced users
> and complex models, we (Naveen, Ankit and Lai) felt it's best to use the
> existing mechanisms due to their imperative nature and the greater control
> they give, so we did not duplicate the save/load functionality in the Hybrid
> block.
> We’ll consider and extend the functionality to Estimator.
> I have had trouble using the pickle package, which is commonly used for
> serialization and deserialization; if you have any other suggestions from
> your experience, please let us know.
>
> 2) +1, we’ll add this to our backlog and add it in our next iteration.
>
> 3) Can you expand a little more on this and how it helps in a production
> environment (which this API was not targeted for)?
> I’ll check the TF Estimator to understand further.
>
> Thanks, Naveen
>
>
> On Thu, Feb 7, 2019 at 2:32 PM Alfredo Luque
>  wrote:
>
> > This is great and something we should all be able to benefit from.
> >
> > There are just three pieces I’d like to advocate for that I feel are
> > shortcomings of some competing APIs on other frameworks (eg; TF
> Estimators)
> > and I would love to see in this proposal:
> >
> > 1) Make serialization/deserialization of these classifiers/regressors
> easy
> > or at least ensure the internal members of the wrapper are easy to
> > save/load. We’ve hacked around this by only allowing hybrid blocks which
> > have easy save/load functionality, but having a simple
> > “save_model”/“load_model” function as a 1st class citizen of these
> proposed
> > APIs will lead to a vastly improved user experience down the road.
> >
> > 2) Allowing the fit/predict/predict_proba functions to take in both data
> > loaders and simple numpy arrays and pandas dataframes is a simple change
> > but a huge usability improvement. Power users and library authors will
> > appreciate being able to use custom data loaders but a large portion of
> end
> > users want to just pass an ndarray or data frame and get some results
> > quickly.
> >
> > 3) Allow lazy construction of the model. This is something I feel TF
> > Estimators do well: by allowing the user to pass a function that
> constructs
> > the net (i.e a model_fn that returns the net) rather than the net itself
> it
> > allows for more control at runtime and usage of these APIs in a
> production
> > environment.
> >
> > Would love your thoughts on these three changes/additions.
> >
> > —Alfredo Luque
> > Software Engineer
> > Machine Learning Infrastructure
> > Airbnb
> > San Francisco, CA
> >
> > On February 7, 2019 at 1:51:17 PM, Ankit Khedia (khedia.an...@gmail.com)
> > wrote:
> >
> > Hello dev@,
> >
> > Training a model in Gluon requires users to write the training loop. This
> > is useful because of its imperative nature; however, repeating the same
> > code
> > across multiple models becomes tedious and repetitive, with boilerplate
> > code. The training loop can also be overwhelming to some users new to
> deep
> > learning. Users have asked in [1] for a simple Fit API, similar to APIs
> > available in SKLearn and Keras as a way to simplify model training and
> > reduce boilerplate code and complexity.
> >
> > So I, along with other contributors Naveen and Lai, came up with a fit API
> > proposal in [2] that covers 80% of the use-cases for beginners; the fit
> API
> > does not replace the Gluon training loops. The API proposal is inspired
> by
> > the Keras fit API. I have discussed and got feedback from a few MXNet
> > contributors (Sheng, Mu, Aston, Zhi) close by and I am writing to ask for
> > the community’s feedback on the API proposal.
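
[Editor's note: purely as an illustration, here is a sketch of what such a fit API might look like to an end user. The Estimator/fit names below are assumptions, not the actual interface; the real design is specified in the proposal linked as [2].]

```python
from mxnet import gluon, init
from mxnet.gluon import nn

# A small Gluon network set up the usual way.
net = nn.Sequential()
net.add(nn.Dense(64, activation='relu'), nn.Dense(10))
net.initialize(init.Xavier())

loss = gluon.loss.SoftmaxCrossEntropyLoss()
trainer = gluon.Trainer(net.collect_params(), 'sgd', {'learning_rate': 0.1})

# Hypothetical one-liner replacing the hand-written training loop
# (names are illustrative only):
# est = Estimator(net=net, loss=loss, metrics='accuracy', trainer=trainer)
# est.fit(train_data=train_loader, val_data=val_loader, epochs=10)
```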
> >
> >
> >
> > [1]
> >
> https://discuss.mxnet.io/t/wrapping-gluon-into-scikit-learn-like-api/2112
> > [2]
> >
> >
> https://cwiki.apache.org/confluence/display/MXNET/Gluon+Fit+API+-+Tech+Design
> >
> >
> > Thanks,
> > Ankit
> >
> >
> > —
> > Alfredo Luque
> > Software Engineer
> > Machine Learning Infrastructure
> > Airbnb
> > San Francisco, CA
> >
>


Re: [DISCUSS] Current Publish problems

2019-01-28 Thread Hagay Lupesko
Good stuff Zach and Qing!
Great feedback Edison - would be great if you leave it in the wiki, so that
it is saved in the context of the doc with other feedback, such as Kellen's.

On Mon, Jan 28, 2019 at 1:58 AM edisongust...@gmail.com <
edisongust...@gmail.com> wrote:

> Hello all,
>
> First let me introduce myself:
>
> My name is Edison Gustavo Muenz. I have worked most of my career with C++,
> Windows and Linux. I am a big fan of machine learning and now I joined
> Amazon in Berlin to work on MXNet.
>
> I would like to give some comments on the document posted:
>
> # change publish OS (Severe)
>
> As a rule of thumb, when providing our own binaries on Linux, we should
> always try to compile against the oldest glibc possible. Using CentOS 7 for
> this (if possible, given the CUDA issues) is the way to go.
>
> # Using Cent OS 7
>
> > However, all of the current GPU build scripts would be unavailable since
> nvidia does not provide the corresponding packages for rpm. In this case,
> we may need to go with NVIDIA Docker for Cent OS 7 and that only provide a
> limited versions of CUDA.
>
> > List of CUDA that NVIDIA supporting for Cent OS 7:
> > CUDA 10, 9.2, 9.1, 9.0, 8.0, 7.5
>
> From what I saw in the link provided (
> https://hub.docker.com/r/nvidia/cuda/), this list of versions is even
> bigger than the list of versions supported on Ubuntu 16.04.
>
> What am I missing?
>
> > Another problem we may see is the performance and stability difference
> on the backend we built since we downgrade libc from 2.19 to 2.17
>
> I would like to first give a brief intro so that we're all on the same
> page. If you already know how libc versioning works, you can skip this
> part.
>
> ## Brief intro on how libc versioning works
>
> In libc, each symbol has 2 components:
> - symbol name
> - version
>
> This can be seen with:
>
> ```
> $ objdump -T /lib/x86_64-linux-gnu/libc.so.6 | grep memcpy
> 000bd4a0  w   DF .text  0009  GLIBC_2.2.5 wmemcpy
> 001332f0 gDF .text  0019  GLIBC_2.4   __wmemcpy_chk
> 0009f0e0 g   iD  .text  00ca  GLIBC_2.14  memcpy
> 000bb460 gDF .text  0028 (GLIBC_2.2.5) memcpy
> 001318a0 g   iD  .text  00ca  GLIBC_2.3.4 __memcpy_chk
> ```
>
> So it can be seen that there are different memory addresses for each
> version of memcpy.
>
> When linking a binary, the linker will always choose the most recent
> version of the libc symbol.
>
> An example:
> - your program uses the `memcpy` symbol
> - when linking, the linker will choose `memcpy` at version 2.14
> (latest)
>
> When executing the binary then the libc provided on your system must have
> a memcpy at version 2.14, otherwise you get the following error:
>
> /lib/x86_64-linux-gnu/libm.so.6: version `libc_2.23' not found
> (required by /tmp/mxnet6145590735071079280/libmxnet.so)
>
> Also, a symbol has its version increased when there are breaking changes.
> So, libc will only increase the version of a symbol if any of its
> inputs/outputs changed in a non-compatible way (eg.: Changing the type of a
> field to a non-compatible type, like int -> short).
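
[Editor's note: a small illustration of how one could check, before publishing, which glibc symbol versions a built library actually requires. The library path is an assumption.]

```
# Report the newest glibc symbol version referenced by a built libmxnet.so;
# the published binary then needs at least that glibc version at runtime.
objdump -T lib/libmxnet.so | grep -o 'GLIBC_[0-9.]*' | sort -Vu | tail -n 1
```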
>
> ## Performance difference between versions 2.17 and 2.19
>
> This website is really handy for this:
> https://abi-laboratory.pro/?view=timeline=glibc
>
> If we look at the links:
>
> -
> https://abi-laboratory.pro/index.php?view=objects_report=glibc=2.18=2.19
> -
> https://abi-laboratory.pro/index.php?view=objects_report=glibc=2.17=2.18
>
> You can see that their binary compatibility is fine since no significant
> changes were made between these versions that could compromise the
> performance.
>
> Finally, I want to thank everyone for letting me be part of this community.
>
> On 2019/01/23 21:48:48, kellen sunderland 
> wrote:
> > Hey Qing, thanks for the summary and to everyone for automating the
> > deployment process.  I've left a few comments on the doc.
> >
> > On Wed, Jan 23, 2019 at 11:46 AM Qing Lan  wrote:
> >
> > > Hi all,
> > >
> > > Recently Zach announced the availability for MXNet Maven publishing
> > > pipeline and general static-build instructions. In order to make it
> better,
> > > I drafted a document that includes the problems we have for this
> pipeline:
> > >
> https://cwiki.apache.org/confluence/display/MXNET/Outstanding+problems+with+publishing
> .
> > > Some of them may need to be addressed very soon.
> > >
> > > Please kindly review and leave any comments you may have in this
> thread or
> > > in the document.
> > >
> > > thanks,
> > > Qing
> > >
> > >
> >
>


Re: mxnet slack, contributor

2019-01-27 Thread Hagay Lupesko
Just sent you an invite!
Or you can just follow this link:
https://join.slack.com/t/the-asf/shared_invite/enQtNDQ3OTEwNzE1MDg5LWY2NjkwMTEzMGI2ZTI1NzUzMDk0MzJmMWM1NWVmODg0MzBjNjAxYzUwMjIwNDI3MjlhZWRjNmNhOTM5NmIxNDk


Welcome!

On Sun, Jan 27, 2019 at 10:26 PM Zach Boldyga  wrote:

> hello, I'm contributing to mxnet and would like to join the slack channel.
> thanks!
>
> Zach Boldyga
> Scalabull  |  Founder
> 1 (866) 846-8771 x 101
>


Re: [Announcement] New Committer - Roshani Nagmote

2019-01-10 Thread Hagay Lupesko
Congrats and so much well deserved :)

On Tue, Jan 8, 2019 at 9:20 AM Steffen Rochel 
wrote:

> Congratulation Roshani!
>
> On Tue, Jan 8, 2019 at 8:53 AM Qing Lan  wrote:
>
> > Congrats Roshani! Great to have you here!
> >
> > -Qing
> >
> > >
> > >
> > > Congrats Roshani.  Well deserved.
> > >
> > >> On Tue, Jan 8, 2019, 8:29 AM Marco de Abreu  > wrote:
> > >>
> > >> Great to have you on board, Roshani!
> > >>
> > >> -Marco
> > >>
> > >> On Tue., 8 Jan. 2019, 15:18, Carin Meier
> > >> wrote:
> > >>
> > >>> Please join me in welcoming Roshani Nagmote as a new committer.
> > >>>
> > >>> She has been active in the project for quite some time. She has
> managed
> > >> the
> > >>> 1.3.0 release as well as being involved in various features including
> the
> > >> Java
> > >>> API and ONNX operators.
> > >>>
> > >>> We are excited to have her onboard as a committer.
> > >>>
> > >>> Github Activity
> > >>>
> > >>>
> > >>
> >
> https://github.com/apache/incubator-mxnet/pulls?utf8=%E2%9C%93=is%3Apr+involves%3ARoshrini
> > >>> +
> > >>>
> > >>> Confluence
> > >>>
> > >>>
> > >>
> >
> https://cwiki.apache.org/confluence/users/viewuserprofile.action?username=roshrini
> > >>>
> > >>
> >
>


Re: Tensorflow contrib loss functions ported to mxnet

2019-01-10 Thread Hagay Lupesko
Istvan,

This sounds useful to me, and would encourage you to contribute it.
It's always helpful if you describe one or two use cases for the
contribution (e.g. models/problems where these loss functions are useful) -
this stimulates interest.

Can you share the link to the PR that got zero attention? I'm happy to help.

Hagay

On Thu, Jan 3, 2019 at 9:52 AM István Fehérvári  wrote:

> Hello developers,
>
>
>
> I am working on metric learning currently and so I ported all tensorflow
> metric learning loss functions to mxnet (gluon) (
>
> https://www.tensorflow.org/api_docs/python/tf/contrib/losses/metric_learning
> ).
>
> Before I put some work into making it PR-able, is there any interest in an
> mxnet (contrib) loss module or I should not bother trying to merge it?
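
[Editor's note: purely as an illustration of what one of these ported metric-learning losses could look like in Gluon, here is a minimal contrastive-loss sketch. It is not the actual ported code; the class and parameter names are assumptions.]

```python
from mxnet.gluon.loss import Loss

class ContrastiveLoss(Loss):
    """Pulls embeddings of similar pairs together and pushes dissimilar pairs
    apart by at least `margin` (label: 1 = similar pair, 0 = dissimilar)."""
    def __init__(self, margin=1.0, weight=None, batch_axis=0, **kwargs):
        super(ContrastiveLoss, self).__init__(weight, batch_axis, **kwargs)
        self._margin = margin

    def hybrid_forward(self, F, emb1, emb2, label):
        # Euclidean distance between the two embeddings of each pair.
        dist = F.sqrt(F.sum(F.square(emb1 - emb2), axis=1) + 1e-12)
        return 0.5 * (label * F.square(dist) +
                      (1 - label) * F.square(F.maximum(self._margin - dist, 0.0)))
```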
>
>
>
> The reason I ask is that I see that we have a LOT of open PRs that get 0
> attention and I do not want to invest time into a feature that will never
> make it. E.g. my last PR on a new operator was opened ~3 weeks ago and
> still no review or comment.
>
>
>
> Thanks a lot,
>
> Istvan
>


Re: [Annoucement] New Committer -- Iblis Lin

2019-01-07 Thread Hagay Lupesko
Congrats Iblis!

On Sat, Jan 5, 2019 at 2:35 PM Steffen Rochel 
wrote:

> Congratulations Iblis!
>
> On Sat, Jan 5, 2019 at 1:45 PM Lin Yuan  wrote:
>
> > Welcome Iblis,
> >
> > Great to see a good Julia support in MXNet!
> >
> > Lin
> >
> > On Sat, Jan 5, 2019 at 12:32 PM Marco de Abreu 
> > wrote:
> >
> > > Welcome Iblis,
> > >
> > > great to have you on board!
> > >
> > > -Marco
> > >
> > > On Sat., 5 Jan. 2019, 21:13, Carin Meier
> > > wrote:
> > >
> > > > Please join me in welcoming Iblis Lin as a new committer.
> > > >
> > > > He has been a long time contributor to the Julia package, is
> > responsible
> > > > for bringing it into the main MXNet repo, and is the current maintainer.
> > > >
> > > > https://github.com/apache/incubator-mxnet/commits?author=iblis17
> > > >
> > > > - Carin Meier
> > > >
> > >
> >
>


Re: [Annoucement] New Committer -- Da Zheng

2018-12-17 Thread Hagay Lupesko
Congrats Da!

On Mon, Dec 17, 2018 at 5:44 PM Marco de Abreu 
wrote:

> Welcome Da, great to have you on board!
>
> On Tue., 18 Dec. 2018, 02:35, Lv, Tao A
> wrote:
>
> > Congrats Da! Thank you for the effort on bringing MKL-DNN to MXNet. It's
> > really the foundation for the later work and improvements.
> >
> > -Original Message-
> > From: Tianqi Chen [mailto:tqc...@apache.org]
> > Sent: Tuesday, December 18, 2018 1:02 AM
> > To: dev@mxnet.incubator.apache.org
> > Subject: [Annoucement] New Committer -- Da Zheng
> >
> > Dear Community:
> >
> > Please join me to welcome Da Zheng as a new committer of the MXNet.
> >
> > Da is the main author of the MKL-DNN integration, and recently he has
> > championed control flow support. He is one of the few "explorer style"
> > contributors in the community, whom we desperately need in the fast-changing
> > environment of the deep learning systems landscape.
> >
> > PRs https://github.com/apache/incubator-mxnet/commits?author=zheng-da
> > reviews  *
> >
> https://github.com/apache/incubator-mxnet/pulls?utf8=%E2%9C%93=is%3Apr+reviewed-by%3Azheng-da+
> > <
> >
> https://github.com/apache/incubator-mxnet/pulls?utf8=%E2%9C%93=is%3Apr+reviewed-by%3Azheng-da+
> > >*
> > dev@  https://lists.apache.org/list.html?d...@mxnet.apache.org:lte=3y:da-
> > zheng
> >
> > Tianqi
> >
>


Re: Cambricon MLU support for MXNet.

2018-12-16 Thread Hagay Lupesko
Welcome to the MXNet community Haochong!
It's exciting to learn about your plans to contribute to MXNet!

I highly recommend that you document your proposal and technical design in
MXNet's design proposals wiki [1
],
where you can go into details and ask for comprehensive feedback from the
community.

Cheers,
Hagay

[1] https://cwiki.apache.org/confluence/display/MXNET/Design+Proposals

On Sun, Dec 16, 2018 at 6:33 PM 张昊翀  wrote:

> Dear MXNet community,
>
> We are from Cambricon, a leading supplier of artificial intelligence
> chips. We have two product lines, including IP products (e.g., Cambricon
> 1A/1H) and chip products (e.g., MLU100 released in May 2018)
>
> We are now adapting MXNet to Cambricon products. As a follow-up, we plan to
> open-source this work, and we hope to merge these new features into the
> master branch of MXNet and make them part of MXNet's long-term support.
> We firmly believe that these MLU features will promote the MXNet community
> development.
> To this end, we are ready to accept rigorous review by the MXNet
> community. In addition, we need advice from the community to achieve a
> high-quality implementation. On this basis, we very much hope to reach
> full-scale, long-term cooperation with the community.
>
> In order to achieve the above goals, we hope to keep in touch with the
> community on some issues. Looking forward to your valuable feedback.
>
> 1. MLU100 mainly focuses on inference, and we plan to first support the
> inference part of MXNet. The training part of MXNet on MLU will be released
> in the future. Is that acceptable to the MXNet community?
>
> 2. Though MLU can support various operators/networks, to guarantee high
> quality, all supported operators submitted to the community should undergo
> rigorous stress test. Thus, at the beginning, we plan to release a small
> number of supported operators and networks, and more of them will be
> continuously added. Is that acceptable or do we have to support all
> networks in the ModelZoo in the first release?
>
> 3. Currently we plan to support both Python and C++ APIs. More details on
> supported APIs will be provided in a follow-up proposal.
>
> 4. We need to modify the mShadow in order to support tensor memory
> operations.
>
> 5. In order to enable the community to run and fully test our code, we
> want to provide the community with a complete test environment. At present,
> we are considering the following three ways.
> A) Provides several remote servers for community and integrates with the
> community's Jenkins.
> B) Provide a cloud platform to the community.
> C) Donate MLU100 to the community's testing platform. However, we don’t
> know the specific ways of donation, and we hope to get help. We are
> wondering about how MXNet's test servers are managed.
>
> About more technical details, a proposal will be submitted to the
> community before releasing the code.
>
> In addition to the above points, the remaining questions and suggestions
> are also welcome. Thanks!
>
> More about Cambricon:
> Cambricon is the artificial intelligence computing pioneer that engineers
> and successfully commercializes world’s first dedicated machine learning
> processor. To bring its unique AI processors from edge to cloud, enriching
> and advancing human life, is the firm mission of the company. Dr. Tianshi
> Chen is the founder and CEO of Cambricon, where he brings over 10 years
> experience in the fields of micro-processor architecture and artificial
> intelligence.
> In 2016, Cambricon released Cambricon 1A processor, the first commercial
> machine learning specific processor in the world. Later, during the 3rd
> World Internet Conference, Cambricon 1A processor was elected as one of
> “World Leading Internet Scientific and Technological Achievements“. In May
> 2018, Cambricon released MLU100, a machine learning chip which is in mass
> production now. By offering revolutionary technology and products,
> Cambricon has established and remains active relationships with various
> companies in the AI industry.
>
>
> Regards,
> Haochong Zhang
> Cambricon MXNet Development Team
>
>
>


Re: [Announcement] New Committer -- Rahul Huilgol

2018-12-03 Thread Hagay Lupesko
+1 - congrats Rahul!

On Mon, Dec 3, 2018 at 8:09 PM kellen sunderland <
kellen.sunderl...@gmail.com> wrote:

> Congrats Rahul, well deserved.
>
> On Mon, Dec 3, 2018 at 6:24 PM Tianqi Chen  wrote:
>
> > Let us welcome Rahul Huilgol as a new Committer of MXNet. He has
> > contributed on many fronts, including FP16 support, distributed
> > training and mixed precision support in MXNet. He has a breadth of
> > knowledge across multiple modules of the system and would be a valuable
> > member of the committer team.
> >
> > PRs https://github.com/apache/incubator-mxnet/commits?author=rahul003
> > Reviews
> >
> >
> https://github.com/apache/incubator-mxnet/pulls?utf8=%E2%9C%93=is%3Apr+reviewed-by%3Arahul003
> > dev@
> > https://lists.apache.org/list.html?d...@mxnet.apache.org:lte=3y:rahul003
> >
> >
> > Tianqi
> >
>


Re: [Announcement] New Committer -- Aaron Markham

2018-12-03 Thread Hagay Lupesko
Congrats Aaron!
Your work on the docs definitely set a new standard and helps the community
tremendously - well deserved!


On Mon, Dec 3, 2018 at 6:22 PM Tianqi Chen  wrote:

> Let us welcome Aaron Markham as a new committer of MXNet. Aaron has been
> actively working on improving the documentation and website of MXNet.
> PRs  *
> https://github.com/apache/incubator-mxnet/pulls?utf8=%E2%9C%93=is%3Apr+reviewed-by%3Aaaronmarkham
> <
> https://github.com/apache/incubator-mxnet/pulls?utf8=%E2%9C%93=is%3Apr+reviewed-by%3Aaaronmarkham
> >*
> reviews  *
> https://github.com/apache/incubator-mxnet/pulls?utf8=%E2%9C%93=is%3Apr+reviewed-by%3Aaaronmarkham+
> <
> https://github.com/apache/incubator-mxnet/pulls?utf8=%E2%9C%93=is%3Apr+reviewed-by%3Aaaronmarkham+
> >*
> dev@  https://lists.apache.org/list.html?d...@mxnet.apache.org:lte=3y:
> 
> *aaronmarkham*
>
> Tianqi
>


Re: CI impaired

2018-12-02 Thread Hagay Lupesko
Thanks for the update Marco and all the hard work put into the CI!

On Sat, Dec 1, 2018 at 1:21 PM Marco de Abreu
 wrote:

> Hello everyone,
>
> the move has just been completed and the old big pipeline as well as the
> according job have been disabled. From now on, you will see the details
> status messages below your PRs.
>
> Some people wanted to make modifications to the Jenkinsfiles recently. In
> that case, your PR will show a merge conflict. The new Jenkinsfiles are
> available at [1].
>
> Yesterday, I have indexed all PRs with our CI system to make sure that each
> one gets properly validated and our merge processes don't get impaired.
> Everything looks good so far, but due to the flakyness of our tests, it's
> quite unlikely that every single test has passed. If your particular PR
> shows a failure for a certain test, please follow the same procedure as
> usual and retrigger it by pushing another commit. From now on, you can also
> trigger partial runs of the CI. For this, just hit up a committer and they
> will be happy to trigger that specific job on your behalf.
>
> If somebody in the community is interested, we would also be happy to
> collaborate on a bot that allows to control CI runs like retriggering
> certain jobs, requesting additional non-PR jobs to run - e.g. when you made
> changes to nightly, etc.
>
> Thanks everybody for being patient and so collaborative during this
> transition time. I'm looking forward to everybody's contributions.
>
> Best regards,
> Marco
>
> [1]: https://github.com/apache/incubator-mxnet/tree/master/ci/jenkins
>
> On Sat, Dec 1, 2018 at 4:27 AM Marco de Abreu <
> marco.g.ab...@googlemail.com>
> wrote:
>
> > Thanks Naveen and Gavin!
> >
> > #1 has been completed and every job has finished its processing.
> >
> > #2 is the ticket with infra:
> > https://issues.apache.org/jira/browse/INFRA-17346
> >
> > I'm now waiting for their response.
> >
> > -Marco
> >
> > On Fri, Nov 30, 2018 at 8:25 PM Naveen Swamy  wrote:
> >
> >> Hi Marco/Gavin,
> >>
> >> Thanks for the clarification. I was not aware that it had been tested in a
> >> separate test environment (this is what I was suggesting, to make the
> >> changes in a more controlled manner). Last time the change was made, many
> >> PRs were left dangling and developers had to go re-trigger them; I
> >> triggered mine at least 5 times before it succeeded today.
> >>
> >> Appreciate all the hard work to make CI better.
> >>
> >> -Naveen
> >>
> >> On Fri, Nov 30, 2018 at 8:50 AM Gavin M. Bell  >
> >> wrote:
> >>
> >> > Hey Folks,
> >> >
> >> > Marco has been running this change in dev, with flying colors, for
> some
> >> > time. This is not an experiment but a roll out that was announced.  We
> >> also
> >> > decided to make this change post the release cut to limit the blast
> >> radius
> >> > from any critical obligations to the community.  Marco is accountable
> >> for
> >> > this work and will address any issues that may occur as he has been
> put
> >> > on-call.  We have, to our best ability, mitigated as much risk as
> >> possible
> >> > and now it is time to pull the trigger.  The community will enjoy a
> bit
> >> > more visibility and clarity into the test process which will be
> >> > advantageous, as well as allowing us to extend our infrastructure in a
> >> way
> >> > that affords us more flexibility.
> >> >
> >> > No pending PRs will be impacted.
> >> >
> >> > Thank you for your support as we evolve this system to better serve
> the
> >> > community.
> >> >
> >> > -Gavin
> >> >
> >> > On Fri, Nov 30, 2018 at 5:23 PM Marco de Abreu
> >> >  wrote:
> >> >
> >> > > Hello Naveen, this is not an experiment. Everything has been tested
> in
> >> > our
> >> > > test system and is considered working 100%. This is not a test but
> >> > actually
> >> > > the move into production - the merge into master happened a week
> ago.
> >> We
> >> > > now just have to put all PRs into the catalogue, which means that
> all
> >> PRs
> >> > > have to be analyzed with the new pipelines - the only thing that
> will
> >> be
> >> > > noticeable is that the CI is under higher load.
> >> > >
> >> > > The pending PRs will not be impacted. The existing pipeline is still
> >> > > running in parallel and everything will behave as before.
> >> > >
> >> > > -Marco
> >> > >
> >> > > On Fri, Nov 30, 2018 at 4:41 PM Naveen Swamy 
> >> wrote:
> >> > >
> >> > > > Marco, run your experiments on a branch - set up, test it well and
> >> then
> >> > > > bring it to the master.
> >> > > >
> >> > > > > On Nov 30, 2018, at 6:53 AM, Marco de Abreu <
> >> > > > marco.g.ab...@googlemail.com.INVALID> wrote:
> >> > > > >
> >> > > > > Hello,
> >> > > > >
> >> > > > > I'm now moving forward with #1. I will try to get to #3 as soon
> as
> >> > > > possible
> >> > > > > to reduce parallel jobs in our CI. You might notice some
> >> unfinished
> >> > > > jobs. I
> >> > > > > will let you know as soon as this process has been completed.
> >> Until
> >> > > then,
> >> 

Re: Apache Infra tickets for MXNet

2018-12-02 Thread Hagay Lupesko
LGTM.
Thanks Marco for clarifying and documenting this - very helpful!

Hagay

On Sun, Dec 2, 2018 at 6:19 PM Marco de Abreu
 wrote:

> Hello Steffen,
>
> great suggestion! I have added a section to the wiki page.
>
> Best regards,
> Marco
>
> On Sat, Dec 1, 2018 at 10:49 PM Steffen Rochel 
> wrote:
>
> > LGTM
> > One suggestion - add a section how to handle security or other sensitive
> > and time critical issues. I assume that an email should be sent to PPMC
> to
> > private list to raise such issue and PPMC will take appropriate action.
> > Steffen
> >
> > On Sat, Dec 1, 2018 at 1:40 PM Marco de Abreu
> >  wrote:
> >
> > > - Resending since I sent it to the wrong list -
> > >
> > > Thank you Steffen and Michael!
> > >
> > > I have created a new page at
> > >
> > >
> >
> https://cwiki.apache.org/confluence/display/MXNET/Contacting+Apache+Infrastructure
> > > .
> > > I'd appreciate a review.
> > >
> > > Best regards,
> > > Marco
> > >
> > > On Sat, Dec 1, 2018 at 8:54 PM Michael Wall  wrote:
> > >
> > > > LGTM, thanks again for taking care of this for the project
> > > >
> > > > Mike
> > > >
> > > > On Sat, Dec 1, 2018 at 1:10 PM Marco de Abreu <
> > > > marco.g.ab...@googlemail.com> wrote:
> > > >
> > > >> Thank you Steffen and Michael!
> > > >>
> > > >> I have created a new page at
> > > >>
> > >
> >
> https://cwiki.apache.org/confluence/display/MXNET/Contacting+Apache+Infrastructure
> > > .
> > > >> I'd appreciate a review.
> > > >>
> > > >> Best regards,
> > > >> Marco
> > > >>
> > > >> On Sat, Dec 1, 2018 at 4:23 PM Michael Wall 
> > wrote:
> > > >>
> > > >>> Thanks for working through that with infra Marco.  I think the
> > process
> > > >>> you outlined is good and being able to submit tickets without
> mentor
> > > >>> approval is good for the project.
> > > >>>
> > > >>> Whoever puts the process on the wiki please reply with a link.
> > > >>>
> > > >>> Mike
> > > >>>
> > > >>> On Thu, Nov 29, 2018 at 11:45 PM Steffen Rochel <
> > > steffenroc...@gmail.com>
> > > >>> wrote:
> > > >>>
> > >  Thanks Marco. Cwiki seems a good place to document the policy.
> > >  Steffen
> > > 
> > >  On Thu, Nov 29, 2018 at 8:06 PM Marco de Abreu
> > >   wrote:
> > > 
> > >  > Hello everyone,
> > >  >
> > >  > I have just had a nice conversation with Greg Stein, VP of
> Apache
> > >  Infra,
> > >  > about the topic of creating tickets against Apache Infra.
> > >  >
> > >  > In the past, we had the restriction that only IPMC members
> (that is,
> > >  mentors)
> > >  > were allowed to file tickets against Apache Infra. This was due to
> > past
> > >  issues
> > >  > where tickets have been created without previous discussions on
> > dev@
> > >  and
> > >  > from people who were not PPMC members, thus creating too much
> > churn.
> > >  >
> > >  > During the last year, the MXNet community has shown that we are
> > able
> > >  to
> > >  > adhere to the Apache ways. Thus the restrictions are being
> lifted
> > > and
> > >  the
> > >  > following policy get set in place:
> > >  >
> > >  > - Only PPMC members are allowed to create tickets (if you can
> see
> > >  > priv...@mxnet.apache.org, you're good to go)
> > >  > - Committers are not allowed to create tickets (if you have
> write
> > >  access to
> > >  > GitHub but can't see priv...@mxnet.apache.org, you're not a
> PPMC
> > >  member
> > >  > but
> > >  > a committer)
> > >  > - Contributors are not allowed to create tickets (if you're
> > neither
> > > a
> > >  PPMC
> > >  > member, nor a committer, then you're a contributor)
> > >  > - There always has to be a dev@ thread before a ticket can be
> > >  created.
> > >  > That
> > >  > thread has to be linked in that said ticket.
> > >  > - Always search for a solution yourself (self-service) before
> > >  engaging with
> > >  > Apache Infra.
> > >  >
> > >  > I'm not sure about a good place to document these guidelines. If
> > >  somebody
> > >  > has a good idea where we should write them down, please feel
> free
> > to
> > >  drop
> > >  > me a link and I'll paste them in there.
> > >  >
> > >  > Thanks everybody for the great collaboration around Apache Infra
> > >  tickets!
> > >  > This was a prime example of a community working together.
> > >  >
> > >  > Best regards,
> > >  > Marco
> > >  >
> > > 
> > > >>>
> > >
> >
>


Re: Include MKLDNN into default mxnet pip package

2018-11-28 Thread Hagay Lupesko
On Wed, Nov 21, 2018 at 6:04 PM Lv, Tao A 
> > wrote:
> > > >>>>
> > > >>>> Here are my answers for the questions from Kellen and Naveen
> > > >>>> about MKL-DNN. It doesn't mean that I'm supportive of making
> > > >>>> MKL-DNN the default here.
> > > >>>>
> > > >>>> @Kellen,
> > > >>>>
> > > >>>> FYI, here is a list for those platforms which are officially
> > > >>>> supported by MKL-DNN.
> > > >>>> https://github.com/intel/mkl-dnn#system-requirements
> > > >>>>
> > > >>>> Most of the computation-intensive kernels in MKL-DNN are JITed, so
> > > >>>> they are supposed to generate code according to the platform at
> > > >>>> runtime. For non-JIT code in MKL-DNN, as with other code in MXNet,
> > > >>>> instructions are generated according to the compiler options/flags.
> > > >>>> We can set -DARCH_OPT_FLAGS when building MKL-DNN to avoid
> > > >>>> optimizing for the compiling machine. That's exactly what we are
> > > >>>> doing for the MKL-DNN build in MXNet. Even without MKL-DNN, I
> > > >>>> noticed there were issues about illegal instructions in MXNet when
> > > >>>> users import the pip package on a lower-end machine which probably
> > > >>>> only supports SSE.
> > > >>>>
> > > >>>> @Naveen,
> > > >>>>
> > > >>>> The LSTM issue has already been identified as a regression from
> > > >>>> the
> > > >>> recent
> > > >>>> version of MKL-DNN. Hopefully it will be fixed soon with a new
> > > >>>> update of MKL-DNN.
> > > >>>>
> > > >>>> MXNet has many submodule dependencies under the 3rd party folder,
> > > >>>> and it seems we don't require release versions for most of these
> > > >>>> dependencies. The release cadences of MKL-DNN and MXNet are not well
> > > >>>> aligned. I think it would be risky for an MXNet release to depend
> > > >>>> strictly on the release of one submodule, let alone on the releases
> > > >>>> of all submodules.
> > > >>>>
> > > >>>> -tao
> > > >>>>
> > > >>>> -Original Message-
> > > >>>> From: Naveen Swamy [mailto:mnnav...@gmail.com]
> > > >>>> Sent: Thursday, November 22, 2018 9:08 AM
> > > >>>> To: dev@mxnet.incubator.apache.org
> > > >>>> Cc: d...@mxnet.apache.org
> > > >>>> Subject: Re: Include MKLDNN into default mxnet pip package
> > > >>>>
> > > >>>> Hi Alex,
> > > >>>>
> > > >>>> Thanks for promptly running the numbers on AMD and reporting here.
> > > >>>>
> > > >>>> Can you please update the AMD numbers here for posterity
> > > >>>>
> > > >>> https://cwiki.apache.org/confluence/display/MXNET/MXNet+with+Int
> > > >>> el
> > > >>> +MKL
> > > >>> -DNN+-+Performance+Benchmarking
> > > >>>> ?
> > > >>>>
> > > >>>> Are there any outstanding issues when MKLDNN is enabled? From my
> > > >>>> offline conversations I am aware of performance issues with LSTM;
> > > >>>> is there a GitHub issue for it?
> > > >>>>
> > > >>>> MKLDNN is a submodule dependency; are we pulling the latest commit
> > > >>>> or releases? If not, we should move to releases before we make it
> > > >>>> the default. Ideally we should use platform-specific distributions
> > > >>>> (-dev packages); at the least we should rely on well-tested
> > > >>>> releases.
> > > >>>>
> > > >>>>
> > > >>>> Thanks, Naveen
> > > >>>>
> > > >>>> On Wed, Nov 21, 

Re: Updating MXNet's Cub

2018-11-24 Thread Hagay Lupesko
Thanks for getting this done Frank!

On Fri, Nov 23, 2018 at 11:28 PM frankfliu2...@gmail.com <
frankfliu2...@gmail.com> wrote:

> I created a PR to address this issue:
> https://github.com/apache/incubator-mxnet/pull/13322
>
> Simply changing the cub submodule's URL would impact every developer: the
> recommended command, "git submodule update", won't work, and developers
> would have to run "git submodule sync" first.
>
> To minimize the impact, I deleted the CUB submodule and added a new submodule,
> "nvidia_cub". The only side effect is a dangling untracked folder "cub" left
> on local disk, which developers can delete manually.
>
> Thanks,
> Frank
>
>
> On 2018/08/24 17:00:54, Hagay Lupesko  wrote:
> > Hi all,
> >
> >
> > One of MXNet’s submodule dependencies is a snapshot of Nvidia Cub (
> > https://github.com/dmlc/cub) – the snapshot is of an older version of
> Cub
> > (1.7), while the latest Nvidia Cub release is 1.8.  Note that dmlc/cub
> has
> > no customizations of the source Cub repo.
> >
> >
> > I’d like to suggest to update the existing Cub submodule to Nvidia’s Cub
> > repo. Instead of the snapshot, MXNet will be using Nvidia’s repo and the
> > latest release (both repos have the same BSD-3 license, so licensing
> should
> > not be an issue).
> >
> >
> > Wanted to get feedback from the community to make sure I'm not missing
> > anything.
> >
> > if there are no objections I'll submit a PR for the change.
> >
> >
> > Cheers,
> >
> > Hagay
> >
>


Re: Requesting Slack Access

2018-11-24 Thread Hagay Lupesko
Invite sent.
Welcome Sam!

On Fri, Nov 23, 2018 at 11:38 PM Sam Bean  wrote:

> --
> Sam Bean
> *StockX*
> *Tech Lead, Machine Learning and Personalization*
> *––*
> samb...@stockx.com
> stockx.com
>


Re: [ANNOUNCEMENT] New Committer: Qing Lan

2018-11-20 Thread Hagay Lupesko
Congrats Qing! Awesome to see you become a committer!

On Tue, Nov 20, 2018 at 4:26 PM Marco de Abreu
 wrote:

> Great to have you on board, Qing!
>
> Am Mi., 21. Nov. 2018, 01:24 hat Naveen Swamy 
> geschrieben:
>
> > The Project Podling Management Committee (PPMC) for Apache MXNet has
> > invited Qing Lan based on his contribution to MXNet Scala to become a
> > committer and we are pleased to announce that he has accepted.
> >
> > Qing, thanks a lot for your contribution and continued effort to support
> > MXNet community.
> >
> > Please join me in welcoming Qing to the project!
> >
> > Thanks, Naveen
> > (on behalf of Apache MXNet PPMC)
> >
>


Re: [RESULTS] [VOTE] Release Apache MXNet (incubating) version 1.3.1.rc0

2018-11-20 Thread Hagay Lupesko
Great - congrats!

On Tue, Nov 20, 2018 at 8:51 AM Anton Chernov  wrote:

> Dear MXNet community,
>
> I'm happy to announce the results of the vote.
>
> This vote passes with 8 +1 votes (4 binding) and no 0 or -1 votes.
>
> +1 votes
>
> * Carin / binding
> * Indhu / binding
> * Sandeep / binding
> * Jim / binding
> * Kellen
> * Steffen
> * Roshani
> * Aaron
>
> 0 votes
> * No votes
>
> -1 votes
> * No votes
>
> Vote thread can be found here [1]. The list of members can be found here
> [2].
>
> I'll continue with the release process and the release announcement will
> follow in the next few days.
>
>
> Best
> Anton
>
> [1]
>
> https://lists.apache.org/thread.html/32ab13b6d2d80fd75dbc2ec62151d12d09f6e0ca89799ae0aa26894b@%3Cdev.mxnet.apache.org%3E
> [2] http://incubator.apache.org/projects/mxnet.html
>


Re: [Announce] Upcoming Apache MXNet (incubating) 1.4.0 release

2018-11-19 Thread Hagay Lupesko
+1 to waiting until the Java API work is ready, since it is a major feature of
the release, and performance should be at least on par with Python.

Also, I consider the MKL-DNN feature to be another major feature of the
release, the performance boost on CPU is significant [1], as an example,
ResNet50-v1 is 15.9x faster on C5.18xlarge.
I spoke with Alex Zai and Manu Seth who are working on MKL-DNN issues and
test coverage, and they feel they can get all remaining open issues in for
this Friday - I propose we also wait for that work to be ready and included
in 1.4.0

Cheers,
Hagay

[1]
https://cwiki.apache.org/confluence/display/MXNET/MXNet+with+Intel+MKL-DNN+-+Performance+Benchmarking


On Mon, Nov 19, 2018 at 10:57 AM Steffen Rochel 
wrote:

> On Friday the contributors working on Java API discovered a potential
> performance problem with inference using Java API vs. Python. Investigation
> is ongoing.
> As the Java API is one of the main features for the upcoming release, I
> suggest to post-pone the code freeze towards end of this week.
>
> Please provide feedback and concern about the change in dates for code
> freeze and 1.4.0 release. I will provide updates on progress resolving the
> potential performance problem.
>
> Patrick - do you think it is possible to resolve the remaining issues on
> MKL-DNN this week, so we can consider GA for MKL-DNN with 1.4.0?
>
> Regards,
> Steffen
>
> On Thu, Nov 15, 2018 at 5:26 AM Anton Chernov  wrote:
>
> > I'd like to remind everyone that 'code freeze' would mean cutting a
> v1.4.x
> > release branch and all following fixes would need to be backported.
> > Development on master can be continued as usual.
> >
> > Best
> > Anton
> >
> > ср, 14 нояб. 2018 г. в 6:04, Steffen Rochel :
> >
> > > Dear MXNet community,
> > > the agreed plan was to establish code freeze for 1.4.0 release today.
> As
> > > the 1.3.1 patch release is still ongoing I suggest to post-pone the
> code
> > > freeze to Friday 16th November 2018.
> > >
> > > Sergey Kolychev has agreed to act as co-release manager for all tasks
> > which
> > > require committer privileges. If anybody is interested to volunteer as
> > > release manager - now is the time to speak up. Otherwise I will manage
> > the
> > > release.
> > >
> > > Regards,
> > > Steffen
> > >
> >
>


Re: LabelBot New Design in Production

2018-11-08 Thread Hagay Lupesko
Thanks for this useful contribution Harsh!

+1 to an updated issue template
and +1 to Marco's idea as well
Anything that helps the community triage and makes it easier for folks who
file issues is greatly appreciated

Hagay

On Thu, Nov 8, 2018 at 4:05 PM Marco de Abreu
 wrote:

> Great job, Harsh!
>
> That's a very good idea, Naveen. Harsh, Qing and I have been thinking about
> the bot "welcoming" the user when they create an issue or pull request by
> creating a comment as soon as the thread gets created. This message could
> contain basic instructions like these commands, recommendations that are
> dependent on the users requests and other dynamic content that we could
> improve over time (think about it recommending you to check out the discuss
> forum when you ask a question, asking you to provide a minimum reproducible
> example if you report a bug, etc). That way, we would reduce the amount
> boilerplate in the issue template and at the same time provide the user
> with custom tailored assistance.
>
> Best regards,
> Marco
>
> On Fri, Nov 9, 2018 at 1:00 AM Naveen Swamy  wrote:
>
> > Great job! This is very helpful for triaging issues! Users, when creating a
> > new issue, could themselves tag the issues. Maybe we should add that to
> > the issue template?
> >
> > On Thu, Nov 8, 2018 at 3:54 PM Harsh Patel 
> > wrote:
> >
> > > Hey all,
> > > The upgraded label bot has been pushed into production. Current
> > > functionality includes
> > > add, delete, and update.
> > > (i.e. @mxnet-label-bot add ['label']
> > > @mxnet-label-bot remove ['label']
> > > @mxnet-label-bot update ['label'])
> > >
> > > Users should feel free to leave suggestions and report any potential
> > > issues. The best forum for this would be here:
> > > https://github.com/apache/incubator-mxnet/issues/13163
> > >
> > > Best,
> > > -Harsh Patel
> > >
> >
>


Re: MKLDNN dynamically linked

2018-11-08 Thread Hagay Lupesko
+1
Kellen made a good call about watching out for the license. Not an issue
for MKL-DNN though, which I believe has an Apache 2 license.

On Thu, Nov 8, 2018 at 3:51 PM Zhao, Patric  wrote:

> +1 for static link :)
>
> Feel free to let us know if anything we can help.
>
> > -Original Message-
> > From: kellen sunderland [mailto:kellen.sunderl...@gmail.com]
> > Sent: Friday, November 9, 2018 7:30 AM
> > To: dev@mxnet.incubator.apache.org
> > Cc: d...@mxnet.apache.org
> > Subject: Re: MKLDNN dynamically linked
> >
> > I think we should bias towards static linking.  It should make using
> mxnet
> > easier in a lot of cases for users.  As long as the license permits
> static linking
> > (i.e. is non-gpl) I'd +1 static linking for portability and ease of
> use.  The only
> > caveat would be in cases where the package size would cause grief for
> PyPi
> > maintainers.
> >
> > On Thu, Nov 8, 2018, 3:54 PM Sheng Zha  >
> > > +1. Ideally, MKLDNN can be statically linked. mxnet-mkl relies on Make
> > > for building it, so help is wanted on mxnet.
> > >
> > > -sz
> > >
> > > On 2018/11/08 21:28:50, Alex Zai  wrote:
> > > > Currently in mxnet-mkl the libmxnet.so is dynamically linked to to
> > > > libmkldnn.so.0. This is known to cause some issues if the wrong
> > > > version
> > > of
> > > > mkldnn is linked. Can we static link this file instead?
> > > >
> > > > Alex
> > > >
> > >
>


Re: Requesting access for SLACK

2018-11-08 Thread Hagay Lupesko
Invite sent Gaurav!

On Wed, Nov 7, 2018 at 1:33 PM Gaurav Gireesh 
wrote:

> Hi!
> I would like to request access to the Slack channel for MXNet.
>
> Thanks and regards,
> Gaurav Gireesh
>


Re: Include MKLDNN into default mxnet pip package

2018-10-19 Thread Hagay Lupesko
Awesome collaborative effort across many contributors and companies!

The boost is impressive and for MXNet users to get this boost "out of the
box" is a great benefit and makes MXNet an even better choice.

Alex - can you clarify whether there are any downsides with regard to
non-AVX-512 architectures, AMD CPUs, etc.? Will it gracefully fall back?

Hagay


On Fri, Oct 19, 2018, 15:46 Sergio Fernández  wrote:

> If there is no downside on platforms not supporting AVX512 instructions,
> then +1
>
>
> On Wed, Oct 17, 2018, 14:10 Alex Zai  wrote:
>
> > Hey all,
> > We have been working hard these past few months to integrate and
> stabilize
> > Intel’s MKLDNN deep learning CPU accelerator into Mxnet and have made
> > incredible progress. On CPUs with AVX512 instructions (such as c5.18x) we
> > have seen performance increase up to 12x and on other platforms (Macs,
> > AVX2) we seen a speedup of 1.5+. Full list of benchmarks can be found
> here
> > (
> >
> https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95650764
> >  and https://github.com/apache/incubator-mxnet/pull/12591).
> >
> > Currently, using this accelerator requires the developer to either pip
> > install the mxnet-mkl version of mxnet or to build it themselves from
> > source. Given that we should try to provide the best performance "out of
> > the box” with mxnet we should include this in the default build. The
> mkldnn
> > library is included within the pip package build so it does not require
> an
> > external dependency.
> >
> > There were concerns that MKLDNN could cause regressions on certain
> > platforms (as it did with the tensorflow version a while back); but we
> > added an env flag (MXNET_MKLDNN_ENABLED) that allows users to turn off this
> > feature during runtime. Please bring up any other concerns you may have
> and
> > your thoughts on including this accelerator in the default build.
> >
> > Best,
> > Alex
> >
>
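
For readers of this archive who want to try the runtime switch mentioned above,
a minimal sketch is below. The flag name MXNET_MKLDNN_ENABLED comes from the
thread; reading it as an environment variable set before mxnet is imported is an
assumption about typical usage, not something spelled out here.

    import os

    # Assumption: set the flag before importing mxnet so the backend sees it.
    # "0" opts out of the MKL-DNN accelerator; leaving it unset keeps the default.
    os.environ["MXNET_MKLDNN_ENABLED"] = "0"

    import mxnet as mx

    # A trivial op to confirm the library still works with the accelerator off.
    print(mx.nd.ones((2, 2)) * 2)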


Re: [LAZY VOTE]: rename dockerfiles s/.build.//

2018-10-17 Thread Hagay Lupesko
The PR provides a good explanation of this change and all code updates.
LGTM.

On Tue, Oct 16, 2018 at 8:41 AM Pedro Larroy 
wrote:

> Hi
>
> I would like to rename the dockerfiles since they are used as a runtime
> environment and not only as build as they were initially intended.
>
> More info about the change in this PR:
> https://github.com/apache/incubator-mxnet/pull/12423/files
>
>
> Pedro.
>


Re: Request to join the Slack channel

2018-09-26 Thread Hagay Lupesko
Invite sent!

On Tue, Sep 25, 2018 at 10:29 PM Vikas Kumar  wrote:

> Please send me an invite to join.
>
> Thanks
> Vikas
>


Re: multiple installation guides?

2018-09-18 Thread Hagay Lupesko
The /test site seems to be something old that should have been removed a
long time ago, it lists versions 0.10 and 0.10.14 :)
Maybe Aaron has an idea what needs to be done to remove it...

On Fri, Sep 14, 2018 at 4:55 PM Alex Zai  wrote:

> Why do we have two sets of installation guides?
>
> http://mxnet.incubator.apache.org/test/get_started/install.html
>
> https://mxnet.incubator.apache.org/install/index.html?platform=Linux=Python=CPU
>
> The /test domain is also not secure. If this is not supposed to be
> public we should remove it, as it is confusing.
>


Re: Questions regarding C Predict and C++ API provided by MxNet.

2018-09-18 Thread Hagay Lupesko
Amol,

I can try and provide my 2 cents on some of these questions:
- "What are the typical uses cases in which C++ (cpp-package) or C (C
Predict) APIs are used? For example: inference, training or both."
Note that the C API supports inference only.
>From my experience as an Amazon Web Services employee, teams/customers who
used the C API used it mainly for inference. Python is much more convenient
and suitable for rapid experimentation that is important for building and
training models.

- "Currently, users are required to build these APIs from source.
Would it be helpful if these APIs are available as standalone packages
distributed via package managers (example: apt-get)?"
I think it will reduce friction significantly if MXNet offers pre-built
binaries. MXNet takes a while to build and to figure out; there are quite a
few build-flag options, which may be intimidating for users, especially new
users.
Package managers would be great, but even just binary libraries available in
a shared location (e.g. S3) would be super useful.

HTH,
Hagay


On Mon, Sep 17, 2018 at 3:23 PM Amol Lele  wrote:

> Hello everybody,
>
>
>
> As a contributor to the Apache MXNet project I would like to ask the
> community a couple of questions in regard to the C Predict and C++ APIs that
> MXNet provides to its users. My main goal is to better understand the pain
> points community members currently see/have with those APIs, as well as what
> contributions to the C++ and C Predict APIs would be most beneficial to users
> who are using, or have tried to use, these APIs of Apache MXNet.
>
> 1.   What are the typical use cases in which C++ (cpp-package) or C (C
> Predict) APIs are used? For example: inference, training or both.
>
> 2.   Which set of APIs out of C++ and C do users prefer? Preferably
> with reasons why.
>
> 3.   What are the frequently used platforms (Linux, Mac, Windows, etc)
> and configurations (such as CPU, GPU, etc) on which these APIs are used?
>
> 4.   Currently, users are required to build these APIs from source.
> Would it be helpful if these APIs are available as standalone packages
> distributed via package managers (example: apt-get)?
>
> I would highly appreciate your replies to any or all of the above
> questions.
>
>
>
> Thanks,
>
> -Amol
>


Re: [DISCUSS] Build OSX builds in CI (possibly with TravisCI).

2018-09-18 Thread Hagay Lupesko
Bravo indeed!
Awesome work Kellen and Marco!

On Tue, Sep 18, 2018 at 7:56 PM Lin Yuan  wrote:

> Bravo! This is a very important piece of CI. Thanks Kellen and Marco for
> implementing it quickly.
>
>
> Lin
>
> On Tue, Sep 18, 2018, 4:18 PM Marco de Abreu
>  wrote:
>
> > Kellen has fixed the one bug in our build system and thus, there are no
> > outstanding tests :)
> >
> > Exactly, it will run on branch and PR validation.
> >
> > Best regards,
> > Marco
> >
> > sandeep krishnamurthy  schrieb am Di., 18.
> > Sep. 2018, 19:32:
> >
> > > This is awesome. Thanks a lot Kellen and Marco. With this work
> complete,
> > we
> > > will have MXNet Python tests running for Mac on Travis CI, for PR and
> > > Branch builds?
> > > Thank you for working on fixing the tests and making them run as part of
> > > Travis CI for the Mac platform. Is there any GitHub issue or Jira where we
> > can
> > > see the disabled tests that need to be fixed for Mac? This might be
> useful
> > > if we can call for contributions.
> > >
> > > Best,
> > > Sandeep
> > >
> > >
> > > On Tue, Sep 18, 2018 at 9:51 AM Marco de Abreu
> > >  wrote:
> > >
> > > > Hey everyone,
> > > >
> > > > we are about to enable Python tests for Mac. The outstanding bugs
> have
> > > been
> > > > fixed by Kellen and we're just waiting for the PRs to pass. We'll
> send
> > a
> > > > separate email as soon as they are enabled.
> > > >
> > > > Additionally, we had a small problem that Travis runs got aborted if
> > > > multiple commits were done in a short timeframe. While this is
> > acceptable
> > > > for PRs, this causes our branch jobs to also fail. An examples is
> > > available
> > > > at [1]. In order to cope with this, I have asked Apache Infra to
> > disable
> > > > cancellation of concurrent jobs. They agreed to this, but reminded us
> > > that
> > > > they might turn it back on if we consume too many resources.
> > > >
> > > > The dashboard to review the Travis resource utilization is available
> at
> > > > [2]. Just log in as Guest.
> > > >
> > > > Best regards,
> > > > Marco
> > > >
> > > > [1]:
> > > >
> > > >
> > >
> >
> https://travis-ci.org/apache/incubator-mxnet/builds/430135867?utm_source=github_status_medium=notification
> > > > [2]:
> > > >
> > > >
> > >
> >
> https://demo.kibble.apache.org/dashboard.html?page=ci=e0ce4eee89a77ec231eee1fdbbc647cb3de2f6ecfc3cef8d8c11dc2d=hour
> > > >
> > > >
> > > > On Thu, Sep 13, 2018 at 1:06 AM kellen sunderland <
> > > > kellen.sunderl...@gmail.com> wrote:
> > > >
> > > > > We've got fairly limited ability to change what's reported by
> Travis.
> > > > Most
> > > > > administration is done by the ASF Infra crew, so it's tough for us
> to
> > > > > experiment with settings.  It'd be great if you could bear with us
> > for
> > > a
> > > > > few days.  It shouldn't take too long to either (1) get
> happy-feeling
> > > > green
> > > > > checks back, or (2) decide we don't care as much as we thought we
> did
> > > > about
> > > > > MacOS support.
> > > > >
> > > > > On Wed, Sep 12, 2018 at 9:53 PM Aaron Markham <
> > > aaron.s.mark...@gmail.com
> > > > >
> > > > > wrote:
> > > > >
> > > > > > Is there any way to make it not show a red X failure in the
> GitHub
> > UI
> > > > > when
> > > > > > TravisCI fails? I keep going back to check what flakey test
> failed
> > > this
> > > > > > time and realizing that Jenkins is still running and it was the
> > "not
> > > > > > required" Travis fail. The green checkmark makes me happy and
> it's
> > > > easier
> > > > > > to keep an eye on what's going on. If Travis times out a lot of
> the
> > > > time,
> > > > > > then most of our PRs will look red/bad/sad when they're not.
> > > > > >
> > > > > > What about no failure flag set, but add a label that Travis
> > > failed
> > > > or
> > > > > > if we can't control the flag, auto-set labels for each Travis and
> > > > Jenkins
> > > > > > pass/fail so we still get the benefit of at-a-glance status
> checks.
> > > > > >
> > > > > > On Wed, Sep 12, 2018 at 6:04 AM Marco de Abreu
> > > > > >  wrote:
> > > > > >
> > > > > > > Hello,
> > > > > > >
> > > > > > > Travis CI has successfully been enabled just now. This means
> you
> > > will
> > > > > now
> > > > > > > see a new status under your PR which is called
> > > > > > > "continuous-integration/travis-ci/pr".
> > > > > > >
> > > > > > > The job only compiles MXNet on Mac and currently does not run
> > unit
> > > > > tests
> > > > > > -
> > > > > > > we expect the overall execution duration to be around 6 minutes
> > and
> > > > > thus
> > > > > > > faster than the full Jenkins pipeline. The status is set to
> "not
> > > > > > required"
> > > > > > > which means that it does not block merging if that job fails
> > since
> > > > the
> > > > > > > pipeline is still in beta. But in general, it would be good if
> > > > > committers
> > > > > > > review the results in case the job shows a failure. Our last
> > known
> > > > > state
> > > > > > is
> > > > > > > that the pipeline works 

Re: [VOTE] Release MXNet version 1.3.0.RC0

2018-09-04 Thread Hagay Lupesko
Sandeep mentions the issue of an error when a user tries to load model params
trained/saved as FP16.
https://github.com/apache/incubator-mxnet/issues/11849
The fix was done by Sandeep:
https://github.com/apache/incubator-mxnet/pull/12412 and is ready to be
cherry picked into the release branch.
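
For context, here is a minimal, hypothetical sketch of the fp16 save/load
round-trip this issue is about; it is not the repro script from issue #11849,
and the exact calls (save_params/load_params on a Gluon block) are an
assumption about how users hit the error.

    import mxnet as mx
    from mxnet.gluon import nn

    # Build and initialize a tiny Gluon block, then cast its params to float16.
    net = nn.Dense(2, in_units=3)
    net.initialize()
    net.cast('float16')
    net.save_params('net_fp16.params')

    # Loading the float16 params back into a fresh (default float32) block is
    # the step where affected users reportedly saw the error.
    net2 = nn.Dense(2, in_units=3)
    net2.load_params('net_fp16.params')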

This seems like a release blocker to me:
- Basic functionality broken: loading a model (albeit one that was
saved as non-FP32)
- Reported by 3 users (wgchang@, nicklhy@ and ThomasDelteil@)

-1 (non binding)

Hagay



On Tue, Sep 4, 2018 at 12:01 PM sandeep krishnamurthy <
sandeep.krishn...@gmail.com> wrote:

> "- 0"
>
> I believe the bug #11849
> , unable to import
> non-fp32 models into Gluon, fixed in this PR #12412
>  is important for
> the
> users. I would rather pick this fix in this release than plan a minor
> release later.
>
> Best,
> Sandeep
>
>
>
> On Mon, Sep 3, 2018 at 2:34 PM Philip Cho 
> wrote:
>
> > Actually, the command "git clone --recursive
> > https://github.com/apache/incubator-mxnet -b 1.3.0.rc0" works fine now,
> > never mind.
> >
> > On Mon, Sep 3, 2018 at 1:45 PM Philip Cho 
> > wrote:
> >
> > > Unfortunately, MXNet was depending on a branch of TVM that is now
> > deleted.
> > > We will have to merge #12448
> > >  before the
> > release.
> > >
> > > Background: See dmlc/tvm#1394  >.
> > >
> > > Philip.
> > >
> > > On Mon, Sep 3, 2018 at 7:26 AM Carin Meier 
> wrote:
> > >
> > >> Checked out the tag, built and tested the Clojure package. +1
> > >>
> > >> On Fri, Aug 31, 2018 at 10:59 PM Roshani Nagmote <
> > >> roshaninagmo...@gmail.com>
> > >> wrote:
> > >>
> > >> > Hi all,
> > >> >
> > >> > I would like to propose a vote to release Apache MXNet (incubating)
> > >> version
> > >> > 1.3.0.RC0. Voting will start now (Friday, Aug 31st) and end at 7:00
> PM
> > >> > PDT, Wednesday, Sept 5th.
> > >> >
> > >> > Link to release notes:
> > >> > https://github.com/apache/incubator-mxnet/releases
> > >> >
> > >> > Link to release candidate 1.3.0.rc0:
> > >> > *https://github.com/apache/incubator-mxnet/releases/tag/1.3.0.rc
> > >> >  >0*
> > >> >
> > >> > View this page, click on "Build from Source", and use the source
> code
> > >> > obtained from 1.3.0.rc0 tag:
> > >> > https://mxnet.incubator.apache.org/install/index.html
> > >> >
> > >> > Please remember to TEST first before voting accordingly:
> > >> >
> > >> > +1 = approve
> > >> > +0 = no opinion
> > >> > -1 = disapprove (provide reason)
> > >> >
> > >> > Thanks,
> > >> > Roshani
> > >> >
> > >>
> > >
> >
>
>
> --
> Sandeep Krishnamurthy
>


Re: Updating MXNet's Cub

2018-08-28 Thread Hagay Lupesko
Thanks for the feedback Chris. Will follow up.

On Fri, Aug 24, 2018 at 10:53 AM Chris Olivier 
wrote:

> +1 for pointing to NVidia's repo for the newer Cub and subsequent versions.
>
> On Fri, Aug 24, 2018 at 10:01 AM Hagay Lupesko  wrote:
>
> > Hi all,
> >
> >
> > One of MXNet’s submodule dependencies is a snapshot of Nvidia Cub (
> > https://github.com/dmlc/cub) – the snapshot is of an older version of
> Cub
> > (1.7), while the latest Nvidia Cub release is 1.8.  Note that dmlc/cub
> has
> > no customizations of the source Cub repo.
> >
> >
> > I’d like to suggest to update the existing Cub submodule to Nvidia’s Cub
> > repo. Instead of the snapshot, MXNet will be using Nvidia’s repo and the
> > latest release (both repos have the same BSD-3 license, so licensing
> should
> > not be an issue).
> >
> >
> > Wanted to get feedback from the community to make sure I'm not missing
> > anything.
> >
> > if there are no objections I'll submit a PR for the change.
> >
> >
> > Cheers,
> >
> > Hagay
> >
>


Updating MXNet's Cub

2018-08-24 Thread Hagay Lupesko
Hi all,


One of MXNet’s submodule dependencies is a snapshot of Nvidia Cub (
https://github.com/dmlc/cub) – the snapshot is of an older version of Cub
(1.7), while the latest Nvidia Cub release is 1.8.  Note that dmlc/cub has
no customizations of the source Cub repo.


I’d like to suggest to update the existing Cub submodule to Nvidia’s Cub
repo. Instead of the snapshot, MXNet will be using Nvidia’s repo and the
latest release (both repos have the same BSD-3 license, so licensing should
not be an issue).


Wanted to get feedback from the community to make sure I'm not missing
anything.

if there are no objections I'll submit a PR for the change.


Cheers,

Hagay


Re: New committer and PMC member: Carin Meier

2018-08-10 Thread Hagay Lupesko
Congrats Carin!

On Fri, Aug 10, 2018 at 10:23 AM Carin Meier  wrote:

> Thank you. I'm excited to get more involved and help MXNet grow :)
>
> On Fri, Aug 10, 2018 at 12:40 PM Marco de Abreu
>  wrote:
>
> > Dear MXNet community,
> >
> > the Project Management Committee (PMC) for Apache MXNet (incubating)
> > has invited Carin Meier to become a committer and PMC member and we are
> > pleased
> > to announce that she has accepted.
> >
> > Being a committer enables easier contribution to the
> > project since there is no need to go via the patch
> > submission process. This should enable better productivity.
> > Being a PMC member enables assistance with the management
> > and to guide the direction of the project.
> >
> > We all would like to thank Carin for her contributions around the Clojure
> > interface for MXNet. Congratulations and welcome!
> >
> > Best regards,
> > Marco de Abreu
> > - on behalf of the Apache MXNet (incubating) PMC
> >
>


Re: Release plan - MXNET 1.3

2018-08-06 Thread Hagay Lupesko
Some thoughts: why not keep it out of 1.3, and merge it into master so it
can go out with 1.4 instead?
Pros:
- Reduce quality risks for 1.3
- More time to test and get feedback before release
- Avoid further delays in 1.3 release (lots of good stuff there already for
users)
Cons:
- People will need to get master to experiment with TRT (not a major issue
IMO)

Besides, TRT requires a build flag anyway, so MXNet users consuming built
packages (PyPi, Scala) will not be able to try it out unless
building from source...

Thoughts?

On Sun, Aug 5, 2018 at 10:38 PM Steffen Rochel 
wrote:

> Marek, Kellen, Jun, Da, Eric, myself and a few other people discussed
> offline about TensorRT integration PR (
> https://github.com/apache/incubator-mxnet/pull/11325 ). We do agree that
> it
> would be good to include the PR into upcoming 1.3 release, but are all
> concerned about the risk involved and the breaking API change. The
> discussion converged on the following proposal: (1) change the PR to contrib and
> (2) define a different top level API to indicate that the package is part
> of contrib and experimental (details of API TBD between Marek, Kellen and
> Eric). This change would allow to include TRT integration with v1.3 to
> enable users to try TRT with MXNet, minimize the risk and avoid breaking
> API change.
> To accommodate the change the request is to delay RC for a few days.
>
> Regards,
> Steffen
>
> On Tue, Jul 31, 2018 at 5:08 PM Roshani Nagmote  >
> wrote:
>
> > Hi,
> >
> > I have created a wiki for tracking MXNet 1.3 release with the timeline.
> > Please take a look here:
> >
> >
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.3.0+Release+Status
> >
> > I am still waiting for following 2 PRs to get merged:
> > TRT integration: https://github.com/apache/incubator-mxnet/pull/11325
> > Gluon RNN: https://github.com/apache/incubator-mxnet/pull/11482
> >
> > *Code freeze date is 08/02(Thursday).* Kindly try to complete ongoing
> work
> > and get these PRs merged.
> >
> > Thanks,
> > Roshani
> >
> >
> >
> > On Mon, Jul 30, 2018 at 1:02 PM Roshani Nagmote <
> roshaninagmo...@gmail.com
> > >
> > wrote:
> >
> > > Hi all,
> > >
> > > Here is an update on MXNet 1.3 release:
> > > I am still waiting for following PRs to get merged:
> > >
> > > TRT integration: https://github.com/apache/incubator-mxnet/pull/11325
> > > Gluon RNN: https://github.com/apache/incubator-mxnet/pull/11482
> > > Scala examples:
> > >
> > > https://github.com/apache/incubator-mxnet/pull/11753
> > >
> > > https://github.com/apache/incubator-mxnet/pull/11621
> > >
> > > *New code freeze date is: 08/03*  Please try to get your ongoing PRs
> > > merged by then.
> > >
> > > @Pedro, I didn't include your PRs in tracking list as you said those
> are
> > > not critical for now. Please let me know if those needs to be included.
> > > https://github.com/apache/incubator-mxnet/pull/11636
> > > https://github.com/apache/incubator-mxnet/pull/11562
> > >
> > > I also have updated project proposal cwiki page to update the status of
> > > PRs.
> > > <
> >
> https://cwiki.apache.org/confluence/display/MXNET/Project+Proposals+for+next+MXNet+Release
> > >
> > >
> > > Please let me know if I am missing something.
> > >
> > > Thanks,
> > > Roshani
> > >
> > >
> > > On Thu, Jul 26, 2018 at 1:34 PM Pedro Larroy <
> > pedro.larroy.li...@gmail.com>
> > > wrote:
> > >
> > >> I would like to get these PR merged:
> > >>
> > >> https://github.com/apache/incubator-mxnet/pull/11636
> > >> https://github.com/apache/incubator-mxnet/pull/11562
> > >>
> > >> How much longer until the code freeze?
> > >>
> > >> On Thu, Jul 26, 2018 at 1:44 AM Roshani Nagmote <
> > >> roshaninagmo...@gmail.com>
> > >> wrote:
> > >>
> > >> > Hi all,
> > >> >
> > >> > PRs waiting to be merged for 1.3 release:
> > >> > https://github.com/apache/incubator-mxnet/pull/11325
> > >> >
> > >> > Are there any other PRs waiting to get merged? Please let me know.
> > >> >
> > >> > Release blocker issue:
> > >> > https://github.com/apache/incubator-mxnet/issues/11853
> > >> >
> > >> > @Marco, @Kellen, Thanks for bringing up the important topic. I agree
> > >> with
> > >> > you and we(internal Amazon team) will be working on fixing the
> > disabled
> > >> > tests.
> > >> > Currently, my colleague, Hao Jin is working on compiling the list of
> > >> > disabled tests and leading the effort to fix them in the next few
> > days.
> > >> >
> > >> > Thanks,
> > >> > Roshani
> > >> >
> > >> > On Mon, Jul 23, 2018 at 6:39 PM kellen sunderland <
> > >> > kellen.sunderl...@gmail.com> wrote:
> > >> >
> > >> > > Thanks again for organizing Roshani.  I believe the TensorRT work
> is
> > >> > ready
> > >> > > for a merge.  Thanks to Marek and all the NVIDIA people for
> > iterating
> > >> on
> > >> > > it.  If possible could a committer review, make sure it meets
> their
> > >> > > expectations and then merge?  PR is here:
> > >> > > https://github.com/apache/incubator-mxnet/pull/11325
> > 

Re: Release blocker: non-determinstic forward in gluon

2018-07-30 Thread Hagay Lupesko
Thanks Pedro.
Good to know you think it is important as well. I hope the community can
review a proposal on the CWiki soon? that would be great...

On Mon, Jul 30, 2018 at 4:26 AM Pedro Larroy 
wrote:

> Hi Hagay
>
> We are aware of this and we are working in this direction which as you
> point out, is more desirable.
> There's a huge amount of non-trivial work that has gone into building these
> distribution packages from Sheng which needs to be adapted for our CI
> system, and taken into consideration.
>
> Pedro.
>
>
> On Mon, Jul 30, 2018 at 9:07 AM Hagay Lupesko  wrote:
>
> > Thanks Tong for root-causing the issue!
> > Thanks Sheng for following up with an updated PyPi package.
> >
> > What worries me is that we seem to build MXNet PyPi distribution packages
> > with a build config different than the CI where all of the tests are
> > running.
> > Looking here [1
> > <
> >
> https://github.com/apache/incubator-mxnet/blob/master/ci/docker/install/ubuntu_core.sh
> > >]
> > it seems that MXNet CI Ubuntu build uses libopenblas-dev v0.2.18, while
> > PyPi build for MXNet 1.2.1 used v0.3.2 (I would imaging PyPi
> distribution?)
> >
> > Needless to say that if we don't make sure PyPi distribution is aligned
> > with the CI build, similar issues can happen again with other
> dependencies.
> > I'd think we want the build configs to be the same, or better yet have
> the
> > PyPi package be built from the output produced by the CI.
> > Thoughts?
> >
> > [1]
> >
> >
> https://github.com/apache/incubator-mxnet/blob/master/ci/docker/install/ubuntu_core.sh
> >
> >
> > On Fri, Jul 27, 2018 at 11:31 AM Sheng Zha  wrote:
> >
> > > Tong,
> > >
> > > That's great news. I'm glad that OpenBLAS people are responding so
> > quickly.
> > > In that case it's probably a better idea to use that version instead.
> The
> > > latest OpenBLAS version brings many optimizations for all kinds of
> > hardware.
> > >
> > > -sz
> > >
> > > On Fri, Jul 27, 2018 at 11:10 AM, Tong He  wrote:
> > >
> > > > Hi Sheng,
> > > >
> > > > I also opened an issue on OpenBLAS repo:
> > > > https://github.com/xianyi/OpenBLAS/issues/1700 .
> > > >
> > > > As informed that "0.3.2 should be released this weekend", I tested
> > their
> > > > develop branch as well, and it seems the new version has fixed the bug.
> > > >
> > > > Since OpenBLAS 0.3.2 could also have performance improvement,
> > therefore I
> > > > propose to wait for OpenBLAS 0.3.2 for our pip post release.
> > > >
> > > >
> > > > Best regards,
> > > >
> > > > Tong He
> > > >
> > > > 2018-07-27 10:54 GMT-07:00 Sheng Zha :
> > > >
> > > > > Forgot to mention, the post release version is a pip package
> version.
> > > > >
> > > > > -sz
> > > > >
> > > > > > On Jul 27, 2018, at 10:42 AM, Sheng Zha 
> > wrote:
> > > > > >
> > > > > > In this case we can regard it as a release problem, which is
> > usually
> > > > > what post release versions are for. It’s still the same release
> with
> > > > > different dependency, so there is no code change needed.
> > > > > >
> > > > > > -sz
> > > > > >
> > > > > >
> > > > > >> On Jul 27, 2018, at 8:31 AM, Steffen Rochel <
> > > steffenroc...@gmail.com>
> > > > > wrote:
> > > > > >>
> > > > > >> Hi Tong - thanks for root causing the problem.
> > > > > >> Sheng - what is 1.2.1.post0? Shouldn't a patch with fix be
> > released
> > > as
> > > > > >> 1.2.2?
> > > > > >> Steffen
> > > > > >>
> > > > > >>> On Thu, Jul 26, 2018 at 5:33 PM Sheng Zha 
> > > > wrote:
> > > > > >>>
> > > > > >>> Dear users and developers of Apache MXNet (Incubating),
> > > > > >>>
> > > > > >>> Thanks to Tong's dedication, the root cause for this issue was
> > > > > identified
> > > > > >>> to be instability in OpenBLAS's latest stable version 0.3.1.
> For
> > > > > details,
> > > > > >>> see Tong's comment

Re: Release blocker: non-determinstic forward in gluon

2018-07-30 Thread Hagay Lupesko
Thanks Tong for root-causing the issue!
Thanks Sheng for following up with an updated PyPi package.

What worries me is that we seem to build MXNet PyPi distribution packages
with a build config different than the CI where all of the tests are
running.
Looking here [1]
it seems that the MXNet CI Ubuntu build uses libopenblas-dev v0.2.18, while
the PyPi build for MXNet 1.2.1 used v0.3.2 (I would imagine the PyPi
distribution?)

Needless to say that if we don't make sure PyPi distribution is aligned
with the CI build, similar issues can happen again with other dependencies.
I'd think we want the build configs to be the same, or better yet have the
PyPi package be built from the output produced by the CI.
Thoughts?

[1]
https://github.com/apache/incubator-mxnet/blob/master/ci/docker/install/ubuntu_core.sh


On Fri, Jul 27, 2018 at 11:31 AM Sheng Zha  wrote:

> Tong,
>
> That's great news. I'm glad that OpenBLAS people are responding so quickly.
> In that case it's probably a better idea to use that version instead. The
> latest OpenBLAS version brings many optimizations for all kinds of hardware.
>
> -sz
>
> On Fri, Jul 27, 2018 at 11:10 AM, Tong He  wrote:
>
> > Hi Sheng,
> >
> > I also opened an issue on OpenBLAS repo:
> > https://github.com/xianyi/OpenBLAS/issues/1700 .
> >
> > As informed that "0.3.2 should be released this weekend", I tested their
> > develop branch as well, and it seems the new version has fixed the bug.
> >
> > Since OpenBLAS 0.3.2 could also have performance improvement, therefore I
> > propose to wait for OpenBLAS 0.3.2 for our pip post release.
> >
> >
> > Best regards,
> >
> > Tong He
> >
> > 2018-07-27 10:54 GMT-07:00 Sheng Zha :
> >
> > > Forgot to mention, the post release version is a pip package version.
> > >
> > > -sz
> > >
> > > > On Jul 27, 2018, at 10:42 AM, Sheng Zha  wrote:
> > > >
> > > > In this case we can regard it as a release problem, which is usually
> > > what post release versions are for. It’s still the same release with
> > > different dependency, so there is no code change needed.
> > > >
> > > > -sz
> > > >
> > > >
> > > >> On Jul 27, 2018, at 8:31 AM, Steffen Rochel <
> steffenroc...@gmail.com>
> > > wrote:
> > > >>
> > > >> Hi Tong - thanks for root causing the problem.
> > > >> Sheng - what is 1.2.1.post0? Shouldn't a patch with fix be released
> as
> > > >> 1.2.2?
> > > >> Steffen
> > > >>
> > > >>> On Thu, Jul 26, 2018 at 5:33 PM Sheng Zha 
> > wrote:
> > > >>>
> > > >>> Dear users and developers of Apache MXNet (Incubating),
> > > >>>
> > > >>> Thanks to Tong's dedication, the root cause for this issue was
> > > identified
> > > >>> to be instability in OpenBLAS's latest stable version 0.3.1. For
> > > details,
> > > >>> see Tong's comment
> > > >>> <
> > > >>> https://github.com/apache/incubator-mxnet/issues/11853#
> > > issuecomment-408272772
> > > 
> > > >>> .
> > > >>>
> > > >>> Since both the nightly build and the 1.2.1 wheels are affected, we
> > > >>> recommend that we stay on OpenBLAS last known stable version 0.2.20
> > > that
> > > >>> we've been using. I will assume lazy consensus and prepare the fix
> > > >>> (1.2.1.post0).
> > > >>>
> > > >>> -sz
> > > >>>
> > >  On Tue, Jul 24, 2018 at 3:35 PM, Tong He  wrote:
> > > 
> > >  Recently there's an issue regarding the inconsistent result from
> > gluon
> > >  forward:
> > > 
> > >  https://github.com/apache/incubator-mxnet/issues/11853
> > > 
> > >  Given a constant input image and loaded pretrained parameters, we
> > > expect
> > > >>> a
> > >  deterministic output from arbitrary repeats of forwards. However
> > from
> > > the
> > >  issue I see that the forwarded result is non-deterministic. It is
> > > harmful
> > > >>> as
> > >  it makes the results from experiments/benchmarks/inference
> > > meaningless.
> > > 
> > >  Therefore I propose to block the 1.3 release before it gets
> > resolved.
> > > 
> > > >>>
> > >
> >
>
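
For readers who want to reproduce the kind of check discussed above, here is a
minimal, hypothetical sketch (it is not the script from issue #11853; a small
Dense block stands in for the pretrained model): run the same forward pass
twice on a constant input and compare the outputs exactly.

    import mxnet as mx
    from mxnet.gluon import nn

    # Fixed seed and a constant input, so the only source of variation left
    # is the forward computation itself.
    mx.random.seed(0)
    net = nn.Dense(10, in_units=100)
    net.initialize()

    x = mx.nd.ones((1, 100))
    out1 = net(x).asnumpy()
    out2 = net(x).asnumpy()
    print("deterministic forward:", (out1 == out2).all())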


Re: Pylint Undefined variable/name error

2018-07-26 Thread Hagay Lupesko
+1 - thanks Vandana!

On Thu, Jul 26, 2018 at 8:05 PM sandeep krishnamurthy <
sandeep.krishn...@gmail.com> wrote:

> Thanks Vandana for picking up this issue. I think this is important to be
> fixed and enabled in CI. Please let us know the effort required to fix these
> issues and we all can jump in and help you.
>
> On Thu, Jul 26, 2018, 7:27 PM Vandana Kannan  wrote:
>
> > Hi All,
> >
> > On enabling the option "undefined-variable" in pylint (in pylintrc) and
> > executing on the latest code, 52 errors show up (most of them from the
> > example folder). These could lead to Python NameError at runtime. The
> > errors are documented in
> > https://github.com/apache/incubator-mxnet/issues/11904.
> >
> > Currently, this Pylint option is disabled in CI and pylint is not
> executed
> > on the example folder.
> >
> > It might be better to enable this option in CI to catch these errors
> early
> > on, and also work on fixing the errors. Any thoughts/suggestions?
> >
> > Thanks,
> > Vandana
> >
>
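
To illustrate the class of problem being discussed, here is a small,
hypothetical example (not one of the 52 reported cases) of code that pylint's
"undefined-variable" check (E0602) flags and that fails with a NameError at
runtime:

    # `scale_factor` is never defined anywhere, so pylint reports E0602
    # (undefined-variable) and calling the function raises a NameError.
    def rescale(values):
        return [v * scale_factor for v in values]

    if __name__ == "__main__":
        try:
            rescale([1, 2, 3])
        except NameError as err:
            print("caught:", err)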


Re: MXNet Meetup in San Francisco - Aug 1 2018

2018-07-23 Thread Hagay Lupesko
No, unfortunately we're not planning to stream or record the meetup
(equipment/budget)

On Fri, Jul 20, 2018 at 9:36 AM Ivan Serdyuk 
wrote:

> Are you providing streaming or recording?
>
> On Fri, Jul 20, 2018 at 7:27 PM, Hagay Lupesko  wrote:
>
> > Hey folks,
> >
> > Sandeep and me are hosting a meetup in San Francisco Wednesday Aug 1 2018
> > on "Emotion recognition in images: from idea to production".
> > The details are here:
> > https://www.meetup.com/deep-learning-with-mxnet/events/252916863/ - note
> > that there is limited capacity, so if you are interested please RSVP.
> >
> > The community is welcomed to join us, will be a great opportunity to
> mingle
> > and get to know one another in person.
> >
> > Cheers,
> > Hagay
> >
>


MXNet Meetup in San Francisco - Aug 1 2018

2018-07-20 Thread Hagay Lupesko
Hey folks,

Sandeep and I are hosting a meetup in San Francisco on Wednesday, Aug 1 2018
on "Emotion recognition in images: from idea to production".
The details are here:
https://www.meetup.com/deep-learning-with-mxnet/events/252916863/ - note
that there is limited capacity, so if you are interested please RSVP.

The community is welcome to join us; it will be a great opportunity to mingle
and get to know one another in person.

Cheers,
Hagay


Re: FW: Success at Apache: The Apache Way for Executives

2018-07-12 Thread Hagay Lupesko
Could not agree more Yasser - thanks for sharing!

On Tue, Jul 10, 2018 at 11:04 AM Tianqi Chen 
wrote:

> Totally agree with what is being said here; as the community strives to move
> forward it is important to be inclusive and communicative.  The same
> principle also applies beyond this mail-list, as we also need be inclusive
> and welcoming to contributors who contribute via github, write issues and
> use discuss forums.
>
> Tianqi
>
> On Mon, Jul 9, 2018 at 9:35 PM, Yasser Zamani 
> wrote:
>
> > I thought these could be great for our community so I shared them here.
> >
> > "The most important and first lesson I learned from the Apache Community
> > was to avoid short term gains that were unsustainable in the long term.
> > This very important core principle derives in part from the concept of
> > "community over code". It does not matter how much code you write, or how
> > good your code is if you cannot get along, compromise, and communicate
> > respectfully with your peers. The code does not write itself, its the
> > community behind it that keeps the code alive." Alex Karasulu, an
> > entrepreneur with over 25 years of experience said.
> >
> > Best Regards.
> >
> > >-Original Message-
> > >From: Sally Khudairi 
> > >Sent: Monday, July 9, 2018 8:00 PM
> > >To: Apache Announce List 
> > >Subject: Success at Apache: The Apache Way for Executives
> > >
> > >[this post is available online at https://s.apache.org/2Wg8 ]
> > >
> > >by Alex Karasulu
> > >
> > >I'm a long time member of the Apache Software Foundation and have been
> an
> > >executive officer of several corporations over the course of the past 20
> > years.
> > >I've co-founded several projects in the community and mentored several
> > others.
> > >
> > >The "Apache Way" has benefited several aspects of my life, however I
> never
> > >imagined it would help make me a better executive. Even non-technical
> > >executives, in organizations totally outside of the realm of technology,
> > can
> > >benefit from the Zen of the Apache Way.
> > >
> > >Life is hard when you're stupid
> > >
> > >I was involved in a number of early dot com startups as an executive,
> > however
> > >that was before my involvement with Apache and long before any exposure
> to
> > >the Apache Way. To this day, I remember how opportunistic decisions for
> > short
> > >term gains, the lack of collaboration, openness and communication kept
> > causing
> > >friction that made my job and ultimately my life much harder than it had
> > to be.
> > >
> > >Learning while on the job
> > >
> > >Exposure to the philosophy began early even while lurking on mailing
> > lists but
> > >picked up more while incubating the Apache Directory Project where I
> > worked
> > >with others to grow an active community. Meanwhile, I was the Chief
> > >Technology Officer of a large financial services company called Alliance
> > Capital
> > >Partners. It was 2002, and the first time I had to conduct myself as a
> > C-Suite
> > >executive in an enterprise that was obviously not a technology company.
> > >Incidentally, the lack of hands-on coding got me working on a pet
> project
> > that
> > >ultimately became the Apache Directory Server and Apache MINA. The
> project
> > >was medicine to keep me sane and technically up to date. Unbeknownst to
> > me,
> > >this would save my career, not as a developer, but as an executive.
> > >
> > >The Apache Way makes life easier
> > >
> > >The most important and first lesson I learned from the Apache Community
> > was to
> > >avoid short term gains that were unsustainable in the long term. This
> very
> > >important core principle derives in part from the concept of "community
> > over
> > >code". It does not matter how much code you write, or how good your code
> > is if
> > >you cannot get along, compromise, and communicate respectfully with your
> > >peers. The code does not write itself, its the community behind it that
> > keeps the
> > >code alive. Involving only the most technically proficient contributors
> > should
> > >never trump the need to build a sustainable community. I saw projects
> > often
> > >suffer from self-centered yet skilled coders added as committers for
> > short term
> > >gain at the detriment of a healthy sustainable community. So as a
> > corollary to
> > >community over code, avoid short term gains that get in the way of the
> > long term
> > >sustainability of an organization's culture. This has immense
> > applications for any
> > >executive in both technical and non-technical fields.
> > >
> > >While growing my new development organization in this financial services
> > >organization, I decided to avoid hiring people that seemed to be very
> > skilled
> > >technically but lacked the desire or social skills to collaborate with
> > others. Thanks
> > >to experiences at Apache, I could start telling them apart much better
> > than I did
> > >before. Also, I was calmer and less anxious when hiring to fill gaps on
> > the team. It
> 

Re: C++ api issue labeling

2018-07-12 Thread Hagay Lupesko
+1 to combining feature and feature request

On Thu, Jul 12, 2018 at 12:37 AM Marco de Abreu
 wrote:

> +1 to combining feature and feature request
>
> Haibin Lin  schrieb am Do., 12. Juli 2018,
> 10:15:
>
> > +1 merging "feature" with "feature request"
> >
> > On Tue, Jul 10, 2018 at 12:59 PM, Anirudh Acharya  >
> > wrote:
> >
> > > There is another instance of label duplication - We have labels
> > "Feature" (
> > > https://github.com/apache/incubator-mxnet/labels/Feature ) and
> "Feature
> > > Request" (
> > > https://github.com/apache/incubator-mxnet/labels/Feature%20request ).
> I
> > > don't think there is much difference between these two labels.
> > >
> > > It would make sense to merge the "Feature" label into "Feature
> Request".
> > >
> > >
> > > Thanks
> > > Anirudh
> > >
> > >
> > > On Wed, Jun 27, 2018 at 3:50 PM Hagay Lupesko 
> wrote:
> > >
> > > > Thank you everyone for your suggestions.
> > > > I will work with a committer to get this updated ASAP.
> > > >
> > > > On Mon, Jun 25, 2018 at 8:55 AM Marco de Abreu
> > > >  wrote:
> > > >
> > > > > +1 to renaming to Backend
> > > > >
> > > > > On Mon, Jun 25, 2018 at 10:13 AM Hagay Lupesko 
> > > > wrote:
> > > > >
> > > > > > Thanks Lin for your feedback.
> > > > > > Bumping again to get more feedback before concluding.
> > > > > >
> > > > > > On Fri, Jun 22, 2018 at 8:53 AM Lin Yuan 
> > > wrote:
> > > > > >
> > > > > > > I agree with Hagay. Using "Backend" as label makes it much
> easier
> > > to
> > > > > > track.
> > > > > > >  "C++" label only describes the language used in
> implementation,
> > > > > > "Backend"
> > > > > > > better describes the nature of the work (let's assume we change
> > the
> > > > > > backend
> > > > > > > implementation from C++ to other languages in the future).
> > > > > > >
> > > > > > > Lin
> > > > > > >
> > > > > > > On Fri, Jun 22, 2018 at 1:09 AM Hagay Lupesko <
> lupe...@gmail.com
> > >
> > > > > wrote:
> > > > > > >
> > > > > > > > Thanks everyone for chiming in and clarifying.
> > > > > > > > It seems that the "C++" label name is confusing for our
> > community
> > > > > since
> > > > > > > it
> > > > > > > > can be interpreted as both the CPP API and the backend...
> > > > > > > > As an anecdote, this issue [1
> > > > > > > > <https://github.com/apache/incubator-mxnet/issues/10937>] is
> > > > labeled
> > > > > > as
> > > > > > > > "C++" but is about the CPP API, not the backend.
> > > > > > > >
> > > > > > > > Should we just rename "C++" to "Backend" to avoid confusion?
> > > > > > > >
> > > > > > > > [1] https://github.com/apache/incubator-mxnet/issues/10937
> > > > > > > >
> > > > > > > > On Thu, Jun 21, 2018 at 12:39 PM Pedro Larroy <
> > > > > > > > pedro.larroy.li...@gmail.com>
> > > > > > > > wrote:
> > > > > > > >
> > > > > > > > > Agree with Anirudh, they are different things. Maybe change
> > the
> > > > > "C++"
> > > > > > > > label
> > > > > > > > > to "backend" would be more informative?
> > > > > > > > >
> > > > > > > > > On Thu, Jun 21, 2018 at 12:11 PM Anirudh <
> > > anirudh2...@gmail.com>
> > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Hi Hagay,
> > > > > > > > > >
> > > > > > > > > > I think we should keep these two labels seperate since
> they
> > > > mean
> > > > > > > > > different
> > > > > > > > > > things.
> > > > > > > > > > The C++ label refers to the issue for MXNet backend and
> the
> > > CPP
> > > > > > > package
> > > > > > > > > > refers to the CPP language binding for mxnet.
> > > > > > > > > > We can still make C++ API great again irrespective by
> > > filtering
> > > > > out
> > > > > > > CPP
> > > > > > > > > > package issues :).
> > > > > > > > > >
> > > > > > > > > > Anirudh
> > > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > On Thu, Jun 21, 2018 at 11:56 AM, Hagay Lupesko <
> > > > > lupe...@gmail.com
> > > > > > >
> > > > > > > > > wrote:
> > > > > > > > > >
> > > > > > > > > > > Hey community,
> > > > > > > > > > >
> > > > > > > > > > > I was going over the open GitHub issues for MXNet, and
> > > > noticed
> > > > > > that
> > > > > > > > we
> > > > > > > > > > have
> > > > > > > > > > > two labels for the CPP API: "CPP package", "C++"
> > > > > > > > > > >
> > > > > > > > > > > Wanted to suggest we remove "CPP package" and just
> stick
> > to
> > > > > "C++"
> > > > > > > > > > > This will make it easier for the community to classify
> > > issues
> > > > > and
> > > > > > > > focus
> > > > > > > > > > on
> > > > > > > > > > > making the C++ API great again ;)
> > > > > > > > > > >
> > > > > > > > > > > Let me know if someone has any concerns, otherwise I
> will
> > > > find
> > > > > a
> > > > > > > > > > committer
> > > > > > > > > > > that I can work with to make this change.
> > > > > > > > > > >
> > > > > > > > > > > Thanks!
> > > > > > > > > > > Hagay
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: Adding section on how to develop with MXNet to the website

2018-07-02 Thread Hagay Lupesko
Can we have the website at least point to the wiki? makes it more
discoverable...

On Mon, Jul 2, 2018 at 2:10 PM Markham, Aaron 
wrote:

> This is the section where development guides are being maintained:
> https://cwiki.apache.org/confluence/display/MXNET/Development
>
> That way you can edit more freely and have comments versus the mxnet.io
> site which is more for users and static content.
>
>
> On 7/2/18, 2:03 PM, "Hagay Lupesko"  wrote:
>
> Now I understand Pedro. Agree with Naveen this would be helpful.
>
> On Sun, Jul 1, 2018 at 12:34 AM Naveen Swamy 
> wrote:
>
> > No please add, this would be super helpful. I have struggled with
> this and
> > thanks for the help you offered offline. If you can please make a
> small
> > screencast
> >
> > > On Jul 1, 2018, at 12:07 AM, Pedro Larroy <
> pedro.larroy.li...@gmail.com>
> > wrote:
> > >
> > > Hi Hagay
> > >
> > > Meaning how to set the developer environment like CLion to debug
> MXNet,
> > how
> > > to build and run tests etc. Or is this already documented?
> > >
> > > Pedro.
> > >
> > >> On Tue, Jun 26, 2018 at 6:14 AM Hagay Lupesko 
> > wrote:
> > >>
> > >> Pedro,
> > >>
> > >> Anything that helps bring in more contributors is good IMO.
> > >> But can you please clarify what you mean by "develop MXNet
> itself"?
> > >>
> > >> Hagay
> > >>
> > >> On Mon, Jun 25, 2018, 19:33 Markham, Aaron
>  > >
> > >> wrote:
> > >>
> > >>> More or less... Instructions are in the readme in the docs
> folder.
> > Focus
> > >>> on the developer sections. Dependencies and other info is
> provided.
> > Link
> > >>> to your info from the contribute page that's under community.
> > >>>
> > >>> Ping me if you need help.
> > >>>
> > >>> Sent from VMware Boxer
> > >>>
> > >>> On Jun 25, 2018 19:17, Pedro Larroy <
> pedro.larroy.li...@gmail.com>
> > >> wrote:
> > >>> Hi
> > >>>
> > >>> I want to add a section on how to develop MXNet itself to attract
> > >>> contributors. Would this be acceptable for the website?
> > >>>
> > >>> Is there any recommended workflow for this?   Any tools?
> > >>>
> > >>> is it going into docs and `make html` or something else?
> > >>>
> > >>> Thanks.
> > >>>
> > >>> Pedro
> > >>>
> > >>
> >
>
>
>


Re: Adding section on how to develop with MXNet to the website

2018-07-02 Thread Hagay Lupesko
Now I understand Pedro. Agree with Naveen this would be helpful.

On Sun, Jul 1, 2018 at 12:34 AM Naveen Swamy  wrote:

> No please add, this would be super helpful. I have struggled with this and
> thanks for the help you offered offline. If you can please make a small
> screencast
>
> > On Jul 1, 2018, at 12:07 AM, Pedro Larroy 
> wrote:
> >
> > Hi Hagay
> >
> > Meaning how to set the developer environment like CLion to debug MXNet,
> how
> > to build and run tests etc. Or is this already documented?
> >
> > Pedro.
> >
> >> On Tue, Jun 26, 2018 at 6:14 AM Hagay Lupesko 
> wrote:
> >>
> >> Pedro,
> >>
> >> Anything that helps bring in more contributors is good IMO.
> >> But can you please clarify what you mean by "develop MXNet itself"?
> >>
> >> Hagay
> >>
> >> On Mon, Jun 25, 2018, 19:33 Markham, Aaron  >
> >> wrote:
> >>
> >>> More or less... Instructions are in the readme in the docs folder.
> Focus
> >>> on the developer sections. Dependencies and other info is provided.
> Link
> >>> to your info from the contribute page that's under community.
> >>>
> >>> Ping me if you need help.
> >>>
> >>> Sent from VMware Boxer
> >>>
> >>> On Jun 25, 2018 19:17, Pedro Larroy 
> >> wrote:
> >>> Hi
> >>>
> >>> I want to add a section on how to develop MXNet itself to attract
> >>> contributors. Would this be acceptable for the website?
> >>>
> >>> Is there any recommended workflow for this?   Any tools?
> >>>
> >>> is it going into docs and `make html` or something else?
> >>>
> >>> Thanks.
> >>>
> >>> Pedro
> >>>
> >>
>


Re: MXNet 1.3 release - call for volunteers

2018-07-02 Thread Hagay Lupesko
Thank you Roshani!

On Fri, Jun 29, 2018 at 2:30 PM Roshani Nagmote 
wrote:

> Hi,
>
> I would like to volunteer. But as I am not a committer, I would need help
> from some committer.
> Please let me know if anyone is interested to help me with the release.
>
> Thanks,
> Roshani
>
> On Wed, Jun 27, 2018 at 5:52 PM Hagay Lupesko  wrote:
>
> > Hello community!
> >
> > I'd like to kickstart the process for MXNet v1.3 release - and ask for a
> > volunteer to take on this release as a release manager. I am hoping the
> > release process can start next week or so.
> > The release scope is documented here: [1
> > <https://cwiki.apache.org/confluence/display/MXNET/Project+Proposals+for+next+MXNet+Release>]
> > The release process is documented here: [2]
> >
> > Some of the involved tasks require committer privileges, and I can help
> > identifying a committer that will be available to help and mentor the
> > release manager. This is a great opportunity for someone to contribute,
> > ramp up further on the project, and help get the latest and greatest out
> to
> > MXNet users.
> >
> > If you are interested - please reply and let me know!
> >
> > Thanks, Hagay
> >
> > [1]
> >
> >
> https://cwiki.apache.org/confluence/display/MXNET/Project+Proposals+for+next+MXNet+Release
> > [2]
> >
> >
> https://cwiki.apache.org/confluence/display/MXNET/Release+Process?src=contextnavpagetreemode
> >
>


Re: Merging Clojure PR

2018-06-28 Thread Hagay Lupesko
Thanks for your contribution Carin!
Unfortunately I can't do a proper review for Clojure, but it's great to see
the contribution and see how it develops and grows...

On Thu, Jun 28, 2018 at 11:11 AM Carin Meier  wrote:

> Thanks everyone for your feedback and efforts with the Clojure package PR.
>
> I'm delighted to join the MXNet community and work with you all and am
> excited to invite the Clojure community to grow with it :)
>
> Thanks,
> Carin
>
> On Thu, Jun 28, 2018 at 1:38 PM, Pedro Larroy <
> pedro.larroy.li...@gmail.com>
> wrote:
>
> > Yes, great work Carin! I even saw your book on Clojure autographed.
> >
> > Pedro.
> >
> > On Wed, Jun 27, 2018 at 7:24 PM Naveen Swamy  wrote:
> >
> > > Hi All,
> > >
> > > Carin (https://github.com/gigasquid) has done a worked on a Clojure
> > MXNet
> > > package for the Clojure community, Thank you Carin.
> > >
> > > I would like to merge this PR#11205  for the upcoming release 1.3. I am
> > not
> > > a Clojure developer and plan to just do a preliminary review for
> > licenses,
> > > tests, etc., and merge the code. Myself and Pedro called on help from
> the
> > > Clojure community in our day job at Amazon and also Carin's also got
> some
> > > peers from the Clojure community to help with the review.
> > >
> > > If there is a committer who would like to do a complete review, I'll be
> > > happy to step back and let you do it otherwise this PR is going in by
> the
> > > end of week to make it ready for 1.3.
> > >
> > > https://github.com/apache/incubator-mxnet/pull/11205
> > >
> > > Let me know
> > >
> > > Thanks, Naveen
> > >
> >
>


MXNet 1.3 release - call for volunteers

2018-06-27 Thread Hagay Lupesko
Hello community!

I'd like to kickstart the process for MXNet v1.3 release - and ask for a
volunteer to take on this release as a release manager. I am hoping the
release process can start next week or so.
The release scope is documented here: [1]
The release process is documented here: [2]

Some of the involved tasks require committer privileges, and I can help
identifying a committer that will be available to help and mentor the
release manager. This is a great opportunity for someone to contribute,
ramp up further on the project, and help get the latest and greatest out to
MXNet users.

If you are interested - please reply and let me know!

Thanks, Hagay

[1]
https://cwiki.apache.org/confluence/display/MXNET/Project+Proposals+for+next+MXNet+Release
[2]
https://cwiki.apache.org/confluence/display/MXNET/Release+Process?src=contextnavpagetreemode


Re: C++ api issue labeling

2018-06-27 Thread Hagay Lupesko
Thank you everyone for your suggestions.
I will work with a committer to get this updated ASAP.

On Mon, Jun 25, 2018 at 8:55 AM Marco de Abreu
 wrote:

> +1 to renaming to Backend
>
> On Mon, Jun 25, 2018 at 10:13 AM Hagay Lupesko  wrote:
>
> > Thanks Lin for your feedback.
> > Bumping again to get more feedback before concluding.
> >
> > On Fri, Jun 22, 2018 at 8:53 AM Lin Yuan  wrote:
> >
> > > I agree with Hagay. Using "Backend" as label makes it much easier to
> > track.
> > >  "C++" label only describes the language used in implementation,
> > "Backend"
> > > better describes the nature of the work (let's assume we change the
> > backend
> > > implementation from C++ to other languages in the future).
> > >
> > > Lin
> > >
> > > On Fri, Jun 22, 2018 at 1:09 AM Hagay Lupesko 
> wrote:
> > >
> > > > Thanks everyone for chiming in and clarifying.
> > > > It seems that the "C++" label name is confusing for our community
> since
> > > it
> > > > can be interpreted as both the CPP API and the backend...
> > > > As an anecdote, this issue [1
> > > > <https://github.com/apache/incubator-mxnet/issues/10937>] is labeled
> > as
> > > > "C++" but is about the CPP API, not the backend.
> > > >
> > > > Should we just rename "C++" to "Backend" to avoid confusion?
> > > >
> > > > [1] https://github.com/apache/incubator-mxnet/issues/10937
> > > >
> > > > On Thu, Jun 21, 2018 at 12:39 PM Pedro Larroy <
> > > > pedro.larroy.li...@gmail.com>
> > > > wrote:
> > > >
> > > > > Agree with Anirudh, they are different things. Maybe change the
> "C++"
> > > > label
> > > > > to "backend" would be more informative?
> > > > >
> > > > > On Thu, Jun 21, 2018 at 12:11 PM Anirudh 
> > > wrote:
> > > > >
> > > > > > Hi Hagay,
> > > > > >
> > > > > > I think we should keep these two labels separate since they mean
> > > > > different
> > > > > > things.
> > > > > > The C++ label refers to the issue for MXNet backend and the CPP
> > > package
> > > > > > refers to the CPP language binding for mxnet.
> > > > > > We can still make C++ API great again irrespective by filtering
> out
> > > CPP
> > > > > > package issues :).
> > > > > >
> > > > > > Anirudh
> > > > > >
> > > > > >
> > > > > > On Thu, Jun 21, 2018 at 11:56 AM, Hagay Lupesko <
> lupe...@gmail.com
> > >
> > > > > wrote:
> > > > > >
> > > > > > > Hey community,
> > > > > > >
> > > > > > > I was going over the open GitHub issues for MXNet, and noticed
> > that
> > > > we
> > > > > > have
> > > > > > > two labels for the CPP API: "CPP package", "C++"
> > > > > > >
> > > > > > > Wanted to suggest we remove "CPP package" and just stick to
> "C++"
> > > > > > > This will make it easier for the community to classify issues
> and
> > > > focus
> > > > > > on
> > > > > > > making the C++ API great again ;)
> > > > > > >
> > > > > > > Let me know if someone has any concerns, otherwise I will find
> a
> > > > > > committer
> > > > > > > that I can work with to make this change.
> > > > > > >
> > > > > > > Thanks!
> > > > > > > Hagay
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: Adding section on how to develop with MXNet to the website

2018-06-25 Thread Hagay Lupesko
Pedro,

Anything that helps bring in more contributors is good IMO.
But can you please clarify what you mean by "develop MXNet itself"?

Hagay

On Mon, Jun 25, 2018, 19:33 Markham, Aaron 
wrote:

> More or less... Instructions are in the readme in the docs folder. Focus
> on the developer sections. Dependencies and other info is provided.  Link
> to your info from the contribute page that's under community.
>
> Ping me if you need help.
>
> Sent from VMware Boxer
>
> On Jun 25, 2018 19:17, Pedro Larroy  wrote:
> Hi
>
> I want to add a section on how to develop MXNet itself to attract
> contributors. Would this be acceptable for the website?
>
> Is there any recommended workflow for this?   Any tools?
>
> is it going into docs and `make html` or something else?
>
> Thanks.
>
> Pedro
>


Re: C++ api issue labeling

2018-06-25 Thread Hagay Lupesko
Thanks Lin for your feedback.
Bumping again to get more feedback before concluding.

On Fri, Jun 22, 2018 at 8:53 AM Lin Yuan  wrote:

> I agree with Hagay. Using "Backend" as label makes it much easier to track.
>  "C++" label only describes the language used in implementation, "Backend"
> better describes the nature of the work (let's assume we change the backend
> implementation from C++ to other languages in the future).
>
> Lin
>
> On Fri, Jun 22, 2018 at 1:09 AM Hagay Lupesko  wrote:
>
> > Thanks everyone for chiming in and clarifying.
> > It seems that the "C++" label name is confusing for our community since
> it
> > can be interpreted as both the CPP API and the backend...
> > As an anecdote, this issue [1
> > <https://github.com/apache/incubator-mxnet/issues/10937>] is labeled as
> > "C++" but is about the CPP API, not the backend.
> >
> > Should we just rename "C++" to "Backend" to avoid confusion?
> >
> > [1] https://github.com/apache/incubator-mxnet/issues/10937
> >
> > On Thu, Jun 21, 2018 at 12:39 PM Pedro Larroy <
> > pedro.larroy.li...@gmail.com>
> > wrote:
> >
> > > Agree with Anirudh, they are different things. Maybe change the "C++"
> > label
> > > to "backend" would be more informative?
> > >
> > > On Thu, Jun 21, 2018 at 12:11 PM Anirudh 
> wrote:
> > >
> > > > Hi Hagay,
> > > >
> > > > I think we should keep these two labels separate since they mean
> > > different
> > > > things.
> > > > The C++ label refers to the issue for MXNet backend and the CPP
> package
> > > > refers to the CPP language binding for mxnet.
> > > > We can still make C++ API great again irrespective by filtering out
> CPP
> > > > package issues :).
> > > >
> > > > Anirudh
> > > >
> > > >
> > > > On Thu, Jun 21, 2018 at 11:56 AM, Hagay Lupesko 
> > > wrote:
> > > >
> > > > > Hey community,
> > > > >
> > > > > I was going over the open GitHub issues for MXNet, and noticed that
> > we
> > > > have
> > > > > two labels for the CPP API: "CPP package", "C++"
> > > > >
> > > > > Wanted to suggest we remove "CPP package" and just stick to "C++"
> > > > > This will make it easier for the community to classify issues and
> > focus
> > > > on
> > > > > making the C++ API great again ;)
> > > > >
> > > > > Let me know if someone has any concerns, otherwise I will find a
> > > > committer
> > > > > that I can work with to make this change.
> > > > >
> > > > > Thanks!
> > > > > Hagay
> > > > >
> > > >
> > >
> >
>


Re: C++ api issue labeling

2018-06-22 Thread Hagay Lupesko
Thanks everyone for chiming in and clarifying.
It seems that the "C++" label name is confusing for our community since it
can be interpreted as both the CPP API and the backend...
As an anecdote, this issue [1
<https://github.com/apache/incubator-mxnet/issues/10937>] is labeled as
"C++" but is about the CPP API, not the backend.

Should we just rename "C++" to "Backend" to avoid confusion?

[1] https://github.com/apache/incubator-mxnet/issues/10937

On Thu, Jun 21, 2018 at 12:39 PM Pedro Larroy 
wrote:

> Agree with Anirudh, they are different things. Maybe change the "C++" label
> to "backend" would be more informative?
>
> On Thu, Jun 21, 2018 at 12:11 PM Anirudh  wrote:
>
> > Hi Hagay,
> >
> > I think we should keep these two labels separate since they mean
> different
> > things.
> > The C++ label refers to the issue for MXNet backend and the CPP package
> > refers to the CPP language binding for mxnet.
> > We can still make C++ API great again irrespective by filtering out CPP
> > package issues :).
> >
> > Anirudh
> >
> >
> > On Thu, Jun 21, 2018 at 11:56 AM, Hagay Lupesko 
> wrote:
> >
> > > Hey community,
> > >
> > > I was going over the open GitHub issues for MXNet, and noticed that we
> > have
> > > two labels for the CPP API: "CPP package", "C++"
> > >
> > > Wanted to suggest we remove "CPP package" and just stick to "C++"
> > > This will make it easier for the community to classify issues and focus
> > on
> > > making the C++ API great again ;)
> > >
> > > Let me know if someone has any concerns, otherwise I will find a
> > committer
> > > that I can work with to make this change.
> > >
> > > Thanks!
> > > Hagay
> > >
> >
>


C++ api issue labeling

2018-06-21 Thread Hagay Lupesko
Hey community,

I was going over the open GitHub issues for MXNet, and noticed that we have
two labels for the CPP API: "CPP package", "C++"

Wanted to suggest we remove "CPP package" and just stick to "C++"
This will make it easier for the community to classify issues and focus on
making the C++ API great again ;)

Let me know if someone has any concerns, otherwise I will find a committer
that I can work with to make this change.

Thanks!
Hagay


Re: users@mxnet

2018-06-19 Thread Hagay Lupesko
Jim,

Earlier on the thread you suggested clarifying and expanding on the usage of a
user@ mailing list and how it is useful for a project.

It may be helpful for the community to learn a bit more about it. Could you
expand and/or share relevant links and examples?

Thank you,
Hagay

On Tue, Jun 19, 2018, 07:31 Jim Jagielski  wrote:

> Just so we are clear: building and fostering a community takes effort.
> Either it is something important to the project, or it's not.
>
> My assumption is that It Is.
>
> > On Jun 18, 2018, at 8:59 PM, YiZhi Liu  wrote:
> >
> > I am personally not a big fan of mailing list but agree with Thomas
> > that we may get extra users, which worth a try.
> > On the other hand, I also have concern that we do not have a community
> > big enough to support multiple forums. If people asked questions but
> > got no response, that can be worse than not having the mailing list at
> > all.
> > On Mon, Jun 18, 2018 at 5:46 PM Thomas DELTEIL
> >  wrote:
> >>
> >> I was actually the one stating that we didn't need a user mailing list
> >> during the Seattle meetup, given all the reasons already exposed above.
> >>
> >> However given what proponents of a mailing list said, I personally
> wouldn't
> >> mind adding a new channel as a user mailing list, and monitoring it.
> There
> >> seems to be a subset of users, used to apache projects, that wouldn't
> use
> >> the forum but would use a mailing list. Though I think it is not as
> >> feature-rich as the forum and there is a risk of dilution of
> information.
> >> It is more about reaching those extra users. If we see a dilution of
> >> traffic on the forum towards the mailing list (~currently 100
> posts/week)
> >> then maybe we can reconsider our assumptions?
> >>
> >> All the best,
> >>
> >> Thomas Delteil
> >>
> >> On Mon, Jun 18, 2018, 17:30 Pedro Larroy 
> >> wrote:
> >>
> >>> I agree with Tianqi, Eric and others. We shouldn't dilute the community
> >>> with another forum. Disqus is already working and has healthy
> >>> participation, you can get an email digest if you so desire.
> Subscribing to
> >>> a mailing list to get a question answered is quite a heavyweight
> investment
> >>> for many people and users who might not have the resources nor mental
> >>> bandwidth to receive more email volume in their inboxes.
> >>>
> >>> On Mon, Jun 18, 2018 at 10:19 AM Tianqi Chen  >
> >>> wrote:
> >>>
>  The problem of having multiple separate channels of communication is
> that
>  users get confused, and the cost of maintenance goes up(people have to
>  watch both). As the current community was at discuss forum and many
> users
>  prefer it, having a mail-list is only a burden we will bring
> 
>  Tianqi
> 
>  On Mon, Jun 18, 2018 at 9:48 AM, Jim Jagielski 
> wrote:
> 
> > IMO, that is the wrong way to look at it.
> >
> > A users@ mailing list is a great, easy, low-cost and low-overhead
> way
> >>> of
> > *increasing* the user community and providing an extra level of
> >>> support.
> > Unless there is "strong evidence" that this is NOT the case, I would
> > recommend we create the list.
> >
> >> On Jun 16, 2018, at 12:28 AM, Tianqi Chen  >
> > wrote:
> >>
> >> So unless there is a strong evidence that our community users
> prefers
>  the
> >> mail-list, I would recommend we keep the current way
> >>
> >> Tianqi
> >>
> >> On Fri, Jun 15, 2018 at 9:25 PM, Sergio Fernández <
> wik...@apache.org
> 
> > wrote:
> >>
> >>> Are we targeting just Seattle as our community? I really hope we
> are
> >>> thinking a bit beyond that...
> >>>
> >>> On Fri, Jun 15, 2018, 21:22 Tianqi Chen 
> > wrote:
> >>>
>  I remember last time during the mxnet meetup in Seattle, we did a
> > survey,
>  and most users preferred the current discuss forum. So I would say
> >>> we
> >>> stick
>  with that given the user community prefers that
> 
>  Tianqi
> 
>  On Fri, Jun 15, 2018 at 9:13 PM, Sergio Fernández <
> >>> wik...@apache.org
> >
>  wrote:
> 
> > Then, if everybody agree, let's request the mailing list creation
> >>> to
>  INFRA
> > ;-)
> >
> > Marco, I wouldn't do that. Typically developers are also
> >>> subscribed
>  there,
> > since they may be the most informed people for answering users'
>  questions.
> > But the topics discussed there may not be of the interest for
> pure
> > development purposes. Some discussions will jump from users@ to
>  dev@,
>  but
> > at a different level. So I wouldn't forward one mailing list to
> >>> the
>  other.
> >
> > On Fri, Jun 15, 2018, 21:01 Marco de Abreu
> >  wrote:
> >
> >> I think nobody was opposed to it in the past, right?
> >>
> >> I'd propose 

Re: Additional mentor to MXNet - Jim Jagielski

2018-06-18 Thread Hagay Lupesko
That is great news!
Welcome on board Jim!


On Mon, Jun 18, 2018, 15:14 Naveen Swamy  wrote:

> Hi All,
>
> I am excited to announce that we have an additional mentor for MXNet: Jim
> Jagielski, the cofounder of Apache, he brings vast experience in building
> and growing successful communities around projects. I am sure we will be
> able to tap into his experience to bring alignment and build a strong
> community around MXNet.
>
> He has a page on wikipedia, check it out :)
> https://en.wikipedia.org/wiki/Jim_Jagielski.
>
> Thanks, Naveen
>


Re: Clojure Package

2018-06-18 Thread Hagay Lupesko
Carin,

Thanks for contributing so much to MXNet!
I just went through the planned MXNet v1.3 release scope [1], and did not
see the Clojure package. I think it would be a great addition to MXNet, and
initially can be marked as experimental.

Did you consider adding it to MXNet 1.3 as an experimental feature?

Hagay

[1]
https://cwiki.apache.org/confluence/display/MXNET/Project+Proposals+for+next+MXNet+Release


On Fri, Jun 15, 2018 at 5:52 AM Carin Meier  wrote:

> Kovas Boguta https://twitter.com/kovasb, from the Clojure/AI community,
> graciously took some time to review the PR for the clojure package.
>
> He had some insightful feedback and high level questions that I thought
> might be of interest to the larger dev mailing list.
>
> https://github.com/apache/incubator-mxnet/pull/11205
>
> Feel free to join in on the discussion here or on the PR.
>
> - Carin
>
> On Mon, Jun 11, 2018 at 6:49 PM, Carin Meier  wrote:
>
> > I'm fine with whatever process works best and what makes everyone the
> most
> > comfortable.
> >
> > I've completed one of the requests for code coverage and integrated the
> > cloverage, (https://github.com/cloverage/cloverage), plugin in the PR
> > branch. I've published the results to the confluence page to help with
> > transparency and give everyone a better idea where the level of testing
> is
> > currently.
> > https://cwiki.apache.org/confluence/display/MXNET/
> > Clojure+Package+Contribution+Needs
> >
> >
> >
> > On Mon, Jun 11, 2018 at 6:24 PM, Marco de Abreu <
> > marco.g.ab...@googlemail.com> wrote:
> >
> >> Exactly, that's what I'm proposing. Having the migration in multiple
> >> separate PRs which are done in sequence. This would mean that the
> initial
> >> PRs might not be tested.
> >> We could make the factors you mentioned acceptance criteria to be moved
> >> out
> >> of contrib.
> >>
> >> -Marco
> >>
> >> On Mon, Jun 11, 2018 at 3:17 PM Naveen Swamy 
> wrote:
> >>
> >> > I did not suggest doing it all at once. I am not comfortable merging code
> >> > without good tests, documentation and examples; that is not to say the
> >> Clojure
> >> > codebase does not have that.
> >> > All that you are saying can happen in separate PRs if you want to
> break
> >> it
> >> > up.
> >> >
> >> >
> >> > On Tue, Jun 12, 2018 at 12:07 AM, Marco de Abreu <
> >> > marco.g.ab...@googlemail.com> wrote:
> >> >
> >> > > The problem I see here is that the migration will have different
> type
> >> of
> >> > > challenges which should be handled isolated. Trying to solve them
> all
> >> at
> >> > > once will make it very lengthy and also hard to review. Considering
> >> Carin
> >> > > and her team this is doing this on a voluntary base, I'd like to
> keep
> >> the
> >> > > number of hoops to jump through per stage as small as possible and
> >> rather
> >> > > split it up into multiple efforts.
> >> > >
> >> > > If we would do everything at once, we'd have to involve a lot of
> >> people
> >> > and
> >> > > it would be hard to review. We'd need at least two or three
> reviewers
> >> > > involved in that process: You (or another committer familiar with
> >> Scala
> >> > to
> >> > > review the Scala part), Yi Zhi (general reviewer) and me (CI
> >> > integration).
> >> > > It would probably require even more committers for other stuff that
> >> comes
> >> > > up. It would rather be better to keep the parts that have to be
> >> touched
> >> > as
> >> > > isolated and few as possible.
> >> > > For example, after the code has been approved and merged in
> general, I
> >> > can
> >> > > assist with the CI integration. This would not require oversight
> from
> >> > other
> >> > > committers, so they'd be free. After that, we'd need somebody
> familiar
> >> > with
> >> > > the release process (probably Naveen) and I'd be free after that.
> >> Then we
> >> > > need general improvements which would also involve other people
> again.
> >> > > Trying to squeeze everything into a single stage is going to make it
> >> very
> >> > > hard in my opinion.
> >> > >
> >> > >
> >> > > -Marco
> >> > >
> >> > > On Mon, Jun 11, 2018 at 2:56 PM Naveen Swamy 
> >> wrote:
> >> > >
> >> > > > I disagree with your approach, We should bring features
> iteratively
> >> > well
> >> > > > tested and well documented. MXNet already has many different
> >> language
> >> > > > bindings which has quite a bit of tech-debt, I don't want just to
> >> add
> >> > > more
> >> > > > tech-debt to the code base with new language bindings as well.
> >> > > >
> >> > > > On Mon, Jun 11, 2018 at 11:17 PM, Marco de Abreu <
> >> > > > marco.g.ab...@googlemail.com> wrote:
> >> > > >
> >> > > > > I think we should try to separate this migration into multiple
> >> > stages:
> >> > > > > 1. Move into contrib
> >> > > > > 2. Migrate release pipeline
> >> > > > > 3. Migrate tests
> >> > > > > 4. Improvements
> >> > > > > 5. Mark as stable and announce Julia officially
> >> > > > >
> >> > > > > I know how much effort Carin and the other maintainers are
> putting
> 

Re: users@mxnet

2018-06-16 Thread Hagay Lupesko
Agree with Indu's points: email list usability and features seem inferior
compared to the discussion forum, so I would suggest keeping things simple
and sticking with the forum.

Hagay

On Sat, Jun 16, 2018, 06:37 Timur Shenkao  wrote:

> user mail list
>
> Pros:
> - Apache user mail list is indexed and kept forever in the mailing list archives. Very
> convenient.
> - Apache user mail list is indexed by search engines actively and info
> appears in search results pretty soon.
> - You just get e-mails and when you have spare time read & answer them.
>
> Cons:
> - Unless there are active people, the user mail list may become a "cemetery" of
> unanswered questions
>
>
> On Sat, Jun 16, 2018 at 9:24 AM, Marco de Abreu <
> marco.g.ab...@googlemail.com.invalid> wrote:
>
> > Very good points Indu. I also think that the discussion forum is
> definitely
> > of big value and that we should keep it. But I also don't think it would
> > hurt anybody if we open up a new channel of communication, considering
> that
> > managing an email list doesn't cause any additional overhead.
> >
> > Indhu  wrote on Sat., 16 June 2018, 00:37:
> >
> > > I prefer the discuss forum over email for following reasons:
> > >
> > > 1. It is easier for newcomers. People can login using Facebook, Twitter
> > or
> > > GitHub Id
> > >
> > > 2. The format is much more readable for people who search for something
> > in
> > > a search engine and land on the page.
> > >
> > > 3. Markdown support makes it easier to read code in the discussion.
> > >
> > > 4. Like button and marking a reply as answer signals the usefulness of
> an
> > > answer.
> > >
> > > That said, if a reasonable number of people like email lists better,
> I'm
> > > not against it as long as it can co-exist along with the discuss forum.
> > >
> > > Thanks,
> > > Indu
> > >
> > >
> > >
> > > On Fri, Jun 15, 2018, 11:23 PM Sergio Fernández 
> > wrote:
> > >
> > > > Thanks for your opinion, Tianqi. I still would love to listen others'
> > > > opinion on the topic to really assert anything.
> > > >
> > > > On Fri, Jun 15, 2018, 21:41 Tianqi Chen 
> > > wrote:
> > > >
> > > > > Then who should represent the users who are using the forums but
> not
> > > the
> > > > > mail-list? I personally think it is a bit of an abuse of the term
> > "Apache
> > > > > way" to force our mind into the entire community... Maybe I am
> > wrong..
> > > > >
> > > > > Tianqi
> > > > >
> > > > > On Fri, Jun 15, 2018 at 9:39 PM, Sergio Fernández <
> wik...@apache.org
> > >
> > > > > wrote:
> > > > >
> > > > > > Well, I do respect what you discussed in that meetup, if course.
> > But
> > > > for
> > > > > > those who weren't there, maybe the decision taken was a bit
> > > > > > biased.
> > > > > >
> > > > > > In Apache we like to say that "if it didn't happen on the mailing
> > > list
> > > > s,
> > > > > > it didn't happen" ;-)
> > > > > >
> > > > > > Look like there are different feelings about this. Should I cast
> a
> > > > VOTE?
> > > > > >
> > > > > >
> > > > > > On Fri, Jun 15, 2018, 21:27 Tianqi Chen <
> tqc...@cs.washington.edu>
> > > > > wrote:
> > > > > >
> > > > > > > I do think we are targeting all the community, but we must also
> > > agree
> > > > > > that
> > > > > > > the voice of users from the meetup is a representative sample
> of
> > > > users'
> > > > > > > demand, and it is important that we respect that.
> > > > > > >
> > > > > > > Tianqi
> > > > > > >
> > > > > > > On Fri, Jun 15, 2018 at 9:25 PM, Sergio Fernández <
> > > wik...@apache.org
> > > > >
> > > > > > > wrote:
> > > > > > >
> > > > > > > > Are we targeting just Seattle as our community? I really hope
> > we
> > > > are
> > > > > > > > thinking a bit beyond that...
> > > > > > > >
> > > > > > > > On Fri, Jun 15, 2018, 21:22 Tianqi Chen <
> > > tqc...@cs.washington.edu>
> > > > > > > wrote:
> > > > > > > >
> > > > > > > > > I remember last time during the mxnet meetup in Seattle, we
> > > did a
> > > > > > > survey,
> > > > > > > > > and most users preferred the current discuss forum. So I
> > would
> > > > say
> > > > > we
> > > > > > > > stick
> > > > > > > > > with that given the user community prefers that
> > > > > > > > >
> > > > > > > > > Tianqi
> > > > > > > > >
> > > > > > > > > On Fri, Jun 15, 2018 at 9:13 PM, Sergio Fernández <
> > > > > wik...@apache.org
> > > > > > >
> > > > > > > > > wrote:
> > > > > > > > >
> > > > > > > > > > Then, if everybody agree, let's request the mailing list
> > > > creation
> > > > > > to
> > > > > > > > > INFRA
> > > > > > > > > > ;-)
> > > > > > > > > >
> > > > > > > > > > Marco, I wouldn't do that. Typically developers are also
> > > > > subscribed
> > > > > > > > > there,
> > > > > > > > > > since they may be the most informed people for answering
> > > users'
> > > > > > > > > questions.
> > > > > > > > > > But the topics discussed there may not be of the interest
> > for
> > > > > pure
> > > > > > > > > > development purposes. Some discussions will jump from
> > users@
> > > > to
> > > > > > dev@
> > > > > > > ,

Re: Details regarding upcoming PR for runtime TensorRT integration

2018-06-11 Thread Hagay Lupesko
+1 for reviewing a design doc.

Naveen - why do you see it sit under ONNX? Isn't it a broader topic of GPU
acceleration?

Hagay

On Mon, Jun 11, 2018, 12:56 Naveen Swamy  wrote:

> Please add your proposal under design proposals. Once the community has
> reviewed it and there is consensus on the approach, we can create an ONNX-MXNet
> subsection and move it there.
>
> On Mon, Jun 11, 2018 at 9:54 PM, Naveen Swamy  wrote:
>
> > you have access now.
> >
> > On Mon, Jun 11, 2018 at 8:34 PM, Naveen Swamy 
> wrote:
> >
> >> I'll add in about an hour
> >>
> >> > On Jun 11, 2018, at 8:12 PM, Marco de Abreu <
> >> marco.g.ab...@googlemail.com> wrote:
> >> >
> >> > I don't know how to grant permission on Confluence. If somebody else
> >> knows
> >> > how to do so, please grant Marek the edit permissions.
> >> >
> >> > -Marco
> >> >
> >> >> On Mon, Jun 11, 2018 at 11:05 AM Marek Kolodziej 
> >> wrote:
> >> >>
> >> >> Hi Rajan,
> >> >>
> >> >> I wanted to share on Confluence, but it didn't allow me to create a
> new
> >> >> document. If my e-mail address gets permissions to add new Confluence
> >> >> pages, I'll transfer the contents to Confluence. Please keep me
> posted
> >> when
> >> >> I get edit permissions.
> >> >>
> >> >> Thanks!
> >> >>
> >> >> Marek
> >> >>
> >> >>
> >> >>
> >> >> On Mon, Jun 11, 2018 at 11:02 AM singh.raja...@gmail.com <
> >> >> singh.raja...@gmail.com> wrote:
> >> >>
> >> >>> Hi Marek,
> >> >>>
> >> >>> Thanks for sharing the  document. It would be great if you could
> >> share it
> >> >>> on confluence wiki or a quip document. The formatting here makes it
> >> very
> >> >>> difficult to read a long document.
> >> >>>
> >> >>> Appreciate the help.
> >> >>>
> >> >>> Thanks
> >> >>> Rajan
> >> >>>
> >>  On 2018/06/11 17:50:26, Marek Kolodziej  wrote:
> >>  Hi everyone,
> >>
> >>  This is a quick summary of NVIDIA's plans for open-sourcing an initial
> >>  integration of TensorRT as a runtime accelerator of MxNet (PR for
> >>  discussion coming in the next few days, ETA of the first draft of the
> >>  PR is this Friday or even earlier). Feedback is appreciated.
> >>
> >>  Best,
> >>  Marek Kolodziej
> >>
> >>  Need for runtime MxNet-TensorRT integration
> >>
> >>  1. TensorRT provides significant acceleration of model inference on
> >>  NVIDIA GPUs compared to running the full graph in MxNet using unfused
> >>  GPU operators. In addition to faster fp32 inference, TensorRT optimizes
> >>  fp16 inference, and is capable of int8 inference (provided the
> >>  quantization steps are performed). Besides increasing throughput,
> >>  TensorRT significantly reduces inference latency, especially for small
> >>  batches. See more here.
> >>
> >>  2. Despite its benefits, using pre-trained models with TensorRT
> >>  typically requires some effort - either re-writing the model using
> >>  TensorRT's graph building APIs, or exporting a model to ONNX, followed
> >>  by an import step. Even if the import is simplified using ONNX, the
> >>  TensorRT user still needs to provide their own data pipeline, which
> >>  used to exist in the framework, but no longer does in a stand-alone
> >>  TensorRT deployment with a client application.
> >>
> >>  3. TensorRT is very performant, but does not have the full set of
> >>  MxNet's operators. While that could be addressed with TensorRT plugins,
> >>  it's much simpler to reuse already-existing MxNet operators. Also, the
> >>  user shouldn't care about knowing which operators are supported by
> >>  TensorRT and which ones aren't - runtime integration allows the graph
> >>  partitioner to extract subgraphs capable of running inside of TensorRT,
> >>  place the subgraph in a TensorRT operator in MxNet, execute that
> >>  operator as part of MxNet's graph execution, and handle
> >>  non-TensorRT-compatible nodes as regular MxNet operators remaining
> >>  after the TensorRT subgraph extraction and node substitution. The goal
> >>  is to accelerate inference without changing user experience.
> >>
> >>  Design considerations
> >>
> >>  1. Since TensorRT can only determine all possible optimizations once
> >>  the tensor shapes are known, it is imperative that all the shape
> >>  information be provided. This means that the best time to construct the
> >>  TensorRT graph is bind time. The coming PR can selectively apply the
> >>  TensorRT optimization for inference-only graphs at symbol bind time.
> >>  This is in fact consistent with the assumptions about TensorRT made on
> >>  the MxNet Wiki here
> >>  <https://cwiki.apache.org/confluence/display/MXNET/Unified+integration+with+external+acceleration+libraries
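
To make the subgraph-extraction idea above concrete, here is a minimal,
hypothetical Python sketch of the greedy partitioning Marek describes: walk
the operators in topological order and group consecutive TensorRT-compatible
ones into segments that could each be replaced by a single TensorRT node,
leaving the rest to regular MXNet operators. The TRT_SUPPORTED set, the
partition() helper, and the example graph are illustrative assumptions, not
the PR's actual implementation, which operates on MXNet's internal graph
representation.

    # Hypothetical sketch of greedy TensorRT subgraph partitioning.
    # All names here are illustrative; none of this is the PR's real API.

    TRT_SUPPORTED = {"Convolution", "BatchNorm", "Activation", "Pooling",
                     "FullyConnected", "elemwise_add"}

    def partition(topo_sorted_ops):
        """Group consecutive TensorRT-compatible ops into subgraphs.

        topo_sorted_ops is a list of (node_name, op_type) tuples in
        topological order.  Returns a list of segments; each segment is
        ("tensorrt", [node names]) or ("mxnet", [node names]).
        """
        segments = []
        for name, op_type in topo_sorted_ops:
            kind = "tensorrt" if op_type in TRT_SUPPORTED else "mxnet"
            if segments and segments[-1][0] == kind:
                segments[-1][1].append(name)      # extend current segment
            else:
                segments.append((kind, [name]))   # start a new segment
        return segments

    if __name__ == "__main__":
        graph = [("conv0", "Convolution"), ("bn0", "BatchNorm"),
                 ("act0", "Activation"), ("topk0", "topk"),  # not TRT-compatible
                 ("fc0", "FullyConnected"), ("out", "softmax")]
        for kind, nodes in partition(graph):
            print(kind, nodes)

In the real integration each "tensorrt" segment would be handed to TensorRT
at bind time, once shapes are known, which is why the email argues for doing
the graph construction at symbol bind time.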
> 

Re: Github link on MXNet Homepage

2018-06-07 Thread Hagay Lupesko
+1 for adding the ribbon:
- developers are used to it
- the upper bar is crowded, especially on mobile

On Thu, Jun 7, 2018, 12:35 Xie, Junyuan  wrote:

> People usually use a ribbon at the top right corner for the github link. Like
> this https://pytorch.org/
>
> On 6/7/18, 12:33 PM, "singh.rajan28@" 
> wrote:
>
> I would vote for "word", as an icon might be difficult to remember or
> will take a couple of extra seconds.
>
> -Rajan
>
> On 2018/06/07 18:11:07, Aaron Markham 
> wrote:
> > What about a GitHub icon instead of the word?
> >
> > Could also use a per page GitHub link to take you to the current
> page's
> > code on GitHub instead of just a link to the project home. I like how
> > docker does a similar thing to solicit feedback and contributions.
> >
> > On Thu, Jun 7, 2018, 10:18 Marco de Abreu <
> marco.g.ab...@googlemail.com>
> > wrote:
> >
> > > Yeah, why not :)
> > >
> > > -Marco
> > >
> > > On Thu, Jun 7, 2018 at 7:03 PM singh.raja...@gmail.com <
> > > singh.raja...@gmail.com> wrote:
> > >
> > > > Hi All,
> > > >
> > > > On MXNet home page, the link to github is currently inside
> > > > Community->GitHub. I think we should have link to GitHub readily
> > > available
> > > > on homepage ( one click), similar to other frameworks homepage
> out there.
> > > >
> > > > WDYT?
> > > >
> > > > Thanks
> > > > Rajan
> > > >
> > > >
> > >
> >
>
>
>
>


Re: Scala Packages in Maven

2018-05-21 Thread Hagay Lupesko
+1 for a CD for publishing to Maven
+1 for reducing the number of packages. Do we really need more than full,
infer and spark (x3 platforms)?

On Mon, May 21, 2018 at 5:47 PM, Naveen Swamy  wrote:

> Not at the moment; it is certainly on my radar. An Apache release requires a
> committer's LDAP username/password. We could see how we can leverage the CI
> setup to do this.
>
> On Mon, May 21, 2018 at 5:44 PM, Marco de Abreu <
> marco.g.ab...@googlemail.com> wrote:
>
> > Great, thanks a lot. This looks great!
> >
> > Is the result of your process going to be a script we can run to generate
> > the artefacts? AFAIK, there's been attempts in the community to push
> > towards CD, thus it'd be great if this process could directly be designed
> > with an automated processing step in mind.
> >
> > -Marco
> >
> > On Tue, May 22, 2018 at 2:41 AM, Naveen Swamy 
> wrote:
> >
> > > I am not sure who published it in the past, hence this discussion.
> > >
> > > I am already in the process of documenting them here; I will clean it up and
> > add
> > > more info as I make progress.
> > > 1)
> > > https://cwiki.apache.org/confluence/display/MXNET/Release+Process#
> > > ReleaseProcess-Step1.12.CreateScalaMavenPackages(WIP)
> > >
> > > 2) https://cwiki.apache.org/confluence/display/MXNET/MXNet-Scala
> > >
> > > On Mon, May 21, 2018 at 5:37 PM, Marco de Abreu <
> > > marco.g.ab...@googlemail.com> wrote:
> > >
> > > > Agree, we should make a proper assessment of what all these packages
> > are
> > > > before we try managing them. Do you know who published them before?
> > > >
> > > > Moving forward, it'd be great if you could document the entire
> process
> > > (and
> > > > all issues you encounter) in confluence. This would allow us to
> > re-visit
> > > > decisions later on and give us a source of information for questions
> > > > exactly like this one. If we decide to add a new package to the
> publish
> > > > process, we could just document it there and have a central point to
> > look
> > > > it up.
> > > >
> > > > -Marco
> > > >
> > > > On Tue, May 22, 2018 at 1:34 AM, Naveen Swamy 
> > > wrote:
> > > >
> > > > > I think this needs quite a bit of rework to clean up; currently I
> am
> > > > > thinking of publishing only the mxnet-full_2.11-{platform} x 3 and
> > revisit
> > > > > which of the other packages should be published by the next release
> > > > >
> > > > > On Mon, May 21, 2018 at 3:49 PM, Naveen Swamy 
> > > > wrote:
> > > > >
> > > > > > I am working on publishing MXNet Scala packages of the 1.2
> release
> > to
> > > > > > maven and observed that there are about 20 packages that needs to
> > be
> > > > > > published. I think this is too many of them and probably will
> > confuse
> > > > the
> > > > > > users.
> > > > > >
> > > > > > I think we can cut down the number of packages. I wanted to ask if
> > > someone
> > > > > > knows why the below packages were published and if it is ok not to
> > > publish
> > > > > > them going forward.
> > > > > >
> > > > > > Proposing to not publish
> > > > > > ---
> > > > > >
> > > > > > mxnet-full-parent_2.11
> > > > > > mxnet-parent_2.11
> > > > > > mxnet-scala-native-parent
> > > > > > Any other packages that you propose to remove?
> > > > > > ---
> > > > > >
> > > > > > ---Full List --
> > > > > > libmxnet-init-scala-{platform} x 2
> > > > > > libmxnet-scala-{platform} x 3
> > > > > > mxnet-core_2.11
> > > > > > mxnet-examples_2.11
> > > > > > mxnet-full-parent_2.11
> > > > > > mxnet-full_2.11-{platform} x 3
> > > > > > mxnet-infer_2.11
> > > > > > mxnet-init_2.11
> > > > > > mxnet-macros_2.11
> > > > > > mxnet-parent_2.11
> > > > > > mxnet-scala-init-native-parent
> > > > > > mxnet-scala-native-parent
> > > > > > mxnet-spark_2.11
> > > > > > ---
> > > > > >
> > > > > > Please let me know your thoughts?
> > > > > >
> > > > > > Thanks, Naveen
> > > > > >
> > > > >
> > > >
> > >
> >
>


Re: Parallel Inference Proposal

2018-05-10 Thread Hagay Lupesko
Good suggestion Kellen!

I like the idea; it will solve an existing deficiency in MXNet that has
been worked around so far. As an example, the recently added Scala
inference API (part of the 1.2 RC) implemented a dispatcher in Scala to
work around that limitation.

Would be great to better understand the changes you are planning in finer
detail, though.

Hagay

On Thu, May 10, 2018 at 7:42 AM, kellen sunderland <
kellen.sunderl...@gmail.com> wrote:

> Hello MXNet developers,
>
>
>
> I’ve recently been speaking with users who’d like to run parallel inference
> requests with MXNet on their service.  They’ll do this on GPUs, and due to
> resource constraints, they’d like to do this without duplicating their
> model’s weights in memory.  They’d also like run inference with a low
> degree of buffering/batching as latency is important.  I’ve created a wiki
> page with a small proposal that I hope will make running parallel inference
> a little easier.  I’d like to discuss the proposal in this thread and would
> particularly appreciate it if core devs could correct me if I’ve made any
> incorrect assumptions in the doc.
>
>
> Proposal here:
> https://cwiki.apache.org/confluence/display/MXNET/
> Parallel+Inference+in+MXNet
>
>
>
> If people are OK with the proposal I can open a Jira ticket, PR, etc.  If
> people are curious about perf implications I can also do some benchmarking.
>
>
>
> Thanks in advance for the feedback,
>
> -Kellen
>
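
Kellen's proposal centers on letting many client threads issue inference
requests against a single copy of the model weights. As a rough illustration
of one way to do that, here is a minimal, hypothetical Python sketch of a
dispatcher that owns the one model instance in a single worker thread and
hands results back to callers through futures. The model_predict placeholder
and the class name are assumptions for this example only, not the API
proposed on the wiki page.

    # Hypothetical dispatcher: many threads submit requests, one worker
    # thread owns the single copy of the model, so weights are never
    # duplicated.  model_predict stands in for a real forward pass.

    import queue
    import threading
    from concurrent.futures import Future

    def model_predict(batch):
        # Placeholder for the real forward pass over shared weights.
        return [x * 2 for x in batch]

    class InferenceDispatcher:
        def __init__(self):
            self._requests = queue.Queue()
            self._worker = threading.Thread(target=self._run, daemon=True)
            self._worker.start()

        def infer(self, batch):
            """Called concurrently from many threads; returns a Future."""
            fut = Future()
            self._requests.put((batch, fut))
            return fut

        def _run(self):
            # Only this thread touches the model, so no weight locking needed.
            while True:
                batch, fut = self._requests.get()
                try:
                    fut.set_result(model_predict(batch))
                except Exception as exc:
                    fut.set_exception(exc)

    if __name__ == "__main__":
        dispatcher = InferenceDispatcher()
        futures = [dispatcher.infer([i, i + 1]) for i in range(4)]
        print([f.result() for f in futures])

Requests are serialized through the single worker here; batching several
queued requests per forward pass, or running a small pool of workers sharing
the same weight arrays, would be natural extensions at the cost of some
latency.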


Re: Apache MXNet build failures are mostly valid - verify before merge

2017-08-31 Thread Hagay Lupesko
Build stability is a major issue; builds have been failing left and right
over the last week. Some of it is due to Jenkins slave issues, but some are
real regressions.
We need to be stricter about the code we're committing.

I propose we configure our master to be a protected branch (
https://help.github.com/articles/about-protected-branches/).

Thoughts?
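
For context on what this would involve mechanically, here is a rough,
hypothetical sketch of enabling protection on master through GitHub's v3
REST API (PUT /repos/{owner}/{repo}/branches/{branch}/protection). The
payload fields follow that documented endpoint, but the status-check context
name and review count below are assumptions, and for an Apache repo a change
like this would go through INFRA rather than a personal token. Verify the
exact payload against GitHub's current documentation before using it.

    # Hypothetical sketch: enable branch protection on master via GitHub's
    # v3 REST API.  Field names follow the branch-protection endpoint at the
    # time of writing; the "Jenkins" context and review count are examples.

    import json
    import urllib.request

    def protect_master(owner, repo, token):
        payload = {
            "required_status_checks": {"strict": True, "contexts": ["Jenkins"]},
            "enforce_admins": True,
            "required_pull_request_reviews": {"required_approving_review_count": 1},
            "restrictions": None,
        }
        url = f"https://api.github.com/repos/{owner}/{repo}/branches/master/protection"
        req = urllib.request.Request(
            url=url,
            data=json.dumps(payload).encode("utf-8"),
            method="PUT",
            headers={
                "Authorization": f"token {token}",
                "Accept": "application/vnd.github+json",
            },
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    # Example (do not run against apache/incubator-mxnet without INFRA approval):
    # protect_master("apache", "incubator-mxnet", token="...")

The point is simply that protection (required status checks and reviews
before merging to master) is a repository setting that can be applied once
and then enforced automatically by GitHub.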

On 2017-08-28 22:41, sandeep krishnamurthy  wrote:
> Hello Committers and Contributors,
>
> Due to unstable build pipelines, for the past week PRs have been merged
> after CR while ignoring the PR build status. The build pipeline is much
> more stable than last week, and most of the build failures you see from
> now on are likely to be valid failures. Hence, it is recommended to wait
> for PR builds and check the root cause of any build failures before
> proceeding with merges.
>
> At this point in time, there are 2 intermittent issues yet to be fixed -
> * Network error leading to GitHub requests throwing 404
> * A conflict in artifacts generated between branches/PRs - cause unknown yet.
> These issues will be fixed soon.
>
>
> --
> Sandeep Krishnamurthy
>