Hi Kellen,

Thank you very much for recognizing our work :)

This is a great joint effort between the community (Wu Jun, Zheng Da, etc.) and
the Intel team.

We are continuously improving the quantization flow, and more amazing
features will be ready soon.
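
For anyone who wants to try the current flow, here is a rough sketch of
post-training quantization with the mxnet.contrib.quantization API (the model
prefix, calibration data, and exact arguments below are placeholders and may
differ as the flow evolves):

    import mxnet as mx
    from mxnet.contrib.quantization import quantize_model

    # Load a trained FP32 model (the 'resnet50_v1' prefix/epoch are placeholders).
    sym, arg_params, aux_params = mx.model.load_checkpoint('resnet50_v1', 0)

    # A small iterator over representative calibration data
    # (random data here, purely as a placeholder).
    calib_iter = mx.io.NDArrayIter(
        data=mx.nd.random.uniform(shape=(64, 3, 224, 224)),
        label=mx.nd.zeros(64),
        batch_size=32)

    # Quantize for CPU inference; 'naive' calibration records min/max ranges.
    qsym, qarg_params, qaux_params = quantize_model(
        sym, arg_params, aux_params,
        ctx=mx.cpu(),
        calib_mode='naive',
        calib_data=calib_iter,
        num_calib_examples=64)

    # Save the INT8 model for inference with the MKLDNN backend.
    mx.model.save_checkpoint('resnet50_v1_int8', 0, qsym, qarg_params, qaux_params)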

Thanks,

--Patric

> -----Original Message-----
> From: kellen sunderland [mailto:kellen.sunderl...@gmail.com]
> Sent: Thursday, November 22, 2018 9:07 AM
> To: dev@mxnet.incubator.apache.org
> Cc: d...@mxnet.apache.org
> Subject: Re: Include MKLDNN into default mxnet pip package
> 
> I've spent the last few days testing MXNet w/ MKLDNN and quantized models
> and it's a beast.  Really good speed improvements on my models, no bugs
> that I've noticed.
> 
> I'm generally supportive, but I'm still wondering what the story is when
> there are no AVX instructions present on the CPU.  Do we get an illegal
> instruction error, or does it fall back gracefully?  So far it sounds like it
> works on a Threadripper and Xen AMD CPU.  I can try on a Ryzen.  What about
> older Intel or AMD CPUs?
> 
> On Wed, Nov 21, 2018 at 4:55 PM Zai, Alexander
> <alex...@amazon.com.invalid>
> wrote:
> 
> > AMD benchmarks have been published. We are seeing a 15.8x speedup with
> > Resnet50 (batch size 32) on AWS's new m5a.24xlarge machine. With a
> > smaller network (Mobilenet, batch size 32) the speedup is more
> > significant at 38.7x. Let's have a vote to see if the PR to have
> > MKLDNN enabled by default
> > (https://github.com/apache/incubator-mxnet/pull/12591) can be merged
> > before the 1.4.0 release.
> >
> > On 10/19/18, 9:17 AM, "Pedro Larroy" <pedro.larroy.li...@gmail.com>
> > wrote:
> >
> >     I did pip install mxnet-mkl==1.3.1b20181018 on an AMD Ryzen 1950X
> >     and unit tests are passing.
> >
> >     Is this build using AVX512?  In /proc/cpuinfo I see only the "avx" flag.
> >     There's no "avx2" like on recent Intel CPUs.
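> >
> >     A quick way to check which SIMD flags the kernel reports (a Linux-only
> >     sketch that just reads /proc/cpuinfo):
> >
> >         # Collect the CPU feature flags reported by the kernel.
> >         flags = set()
> >         with open('/proc/cpuinfo') as f:
> >             for line in f:
> >                 if line.startswith('flags'):
> >                     flags.update(line.split(':', 1)[1].split())
> >
> >         # Report the SIMD levels MKLDNN could take advantage of.
> >         for feature in ('sse4_2', 'avx', 'avx2', 'avx512f'):
> >             print(feature, 'yes' if feature in flags else 'no')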
> >
> >     Pedro.
> >
> >     On Fri, Oct 19, 2018 at 5:12 PM Hagay Lupesko <lupe...@gmail.com>
> > wrote:
> >
> >     > Awesome collaborative effort across many contributors and companies!
> >     >
> >     > The boost is impressive, and for MXNet users to get this boost "out of
> >     > the box" is a great benefit and makes MXNet an even better choice.
> >     >
> >     > Alex - can you clarify whether there are any downsides with regard to
> >     > non-AVX-512 architectures, AMD CPUs, etc.? Will it gracefully fall back?
> >     >
> >     > Hagay
> >     >
> >     >
> >     > On Fri, Oct 19, 2018, 15:46 Sergio Fernández <wik...@apache.org>
> > wrote:
> >     >
> >     > > If there is no downside on platforms not supporting AVX512
> > instructions,
> >     > > then +1
> >     > >
> >     > >
> >     > > On Wed, Oct 17, 2018, 14:10 Alex Zai <aza...@gmail.com> wrote:
> >     > >
> >     > > > Hey all,
> >     > > > We have been working hard these past few months to integrate and
> >     > > > stabilize Intel's MKLDNN deep learning CPU accelerator into MXNet
> >     > > > and have made incredible progress. On CPUs with AVX512 instructions
> >     > > > (such as c5.18x) we have seen performance increases of up to 12x,
> >     > > > and on other platforms (Macs, AVX2) we have seen a speedup of 1.5x+.
> >     > > > The full list of benchmarks can be found here
> >     > > > (https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=95650764
> >     > > > and https://github.com/apache/incubator-mxnet/pull/12591).
> >     > > >
> >     > > > Currently, using this accelerator requires the developer to either
> >     > > > pip install the mxnet-mkl version of mxnet or to build it themselves
> >     > > > from source. Given that we should try to provide the best performance
> >     > > > "out of the box" with mxnet, we should include this in the default
> >     > > > build. The mkldnn library is included within the pip package build,
> >     > > > so it does not require an external dependency.
> >     > > >
> >     > > > There were concerns that MKLDNN could cause regressions on certain
> >     > > > platforms (as it did with the TensorFlow version a while back), but
> >     > > > we added an env flag (MXNET_MKLDNN_ENABLED) that allows users to
> >     > > > turn off this feature at runtime. Please bring up any other concerns
> >     > > > you may have and your thoughts on including this accelerator in the
> >     > > > default build.
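> >     > > >
> >     > > > As a rough illustration, the flag can be set before mxnet is
> >     > > > imported (a sketch; that '0' disables the feature is an assumption
> >     > > > about how the backend reads the variable):
> >     > > >
> >     > > >     import os
> >     > > >
> >     > > >     # Ask the backend to skip MKLDNN operators (assumed semantics:
> >     > > >     # the variable is read when mxnet is loaded/run).
> >     > > >     os.environ['MXNET_MKLDNN_ENABLED'] = '0'
> >     > > >
> >     > > >     import mxnet as mx
> >     > > >     print(mx.__version__)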
> >     > > >
> >     > > > Best,
> >     > > > Alex
> >     > > >
> >     > >
> >     >
> >
> >
> >
