Re: Remove MKLML as dependency

2018-09-19 Thread Chris Olivier
maybe I missed it, but what does MKLML have that mkldnn doesn’t have that
makes it necessary?

what’s the motivation for removing it?

On Tue, Sep 18, 2018 at 11:31 PM Lv, Tao A  wrote:

> If you just want to test the performance, I think you need to link MKL for
> BLAS and MKL-DNN for NN. MKL-DNN should also link against MKL for better
> performance.
>
> Here are some ways to install the full MKL library if you don't have
> one:
> 1. Register and download from the Intel website:
> https://software.intel.com/en-us/mkl
> 2. Apt-get/yum: this currently requires configuring Intel's repositories.
> a.
> https://software.intel.com/en-us/articles/installing-intel-free-libs-and-python-yum-repo
> b.
> https://software.intel.com/en-us/articles/installing-intel-free-libs-and-python-apt-repo
> 
> 3. pip install mkl / mkl-devel: the ‘mkl’ package has the runtime
> libraries, and ‘mkl-devel’ additionally includes the headers
> a.
> https://software.intel.com/en-us/articles/installing-the-intel-distribution-for-python-and-intel-performance-libraries-with-pip-and
> 4. conda install: also has mkl and mkl-devel
> a. https://anaconda.org/intel/mkl
> b. https://anaconda.org/intel/mkl-devel
>
> If you want to redistribute MKL with MXNet, you may need to take care of
> the licensing. Currently, MKL is distributed under the ISSL (
> https://software.intel.com/en-us/license/intel-simplified-software-license
> ).
>
> -Original Message-
> From: Zai, Alexander [mailto:alex...@amazon.com.INVALID]
> Sent: Wednesday, September 19, 2018 12:49 PM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: Remove MKLML as dependency
>
> Will test it out tomorrow.
>
> On a side note, what is the best way to test an MKL build of MXNet,
> given that MKL is licensed?
>
> Best,
> Alex
>
> On 9/18/18, 7:50 PM, "Lv, Tao A"  wrote:
>
> Hi Alex,
>
> Thanks for bringing this up.
>
> The original intention of MKLML is to provide a lightweight and
> easy-to-access library for the ML/DL community. It's released with MKL-DNN
> under the Apache-2.0 license.
>
> AFAIK, MKL-DNN still relies on it for better performance, so I'm
> afraid there will be a performance regression in the MKL pip packages if
> MKLML is simply removed.
>
> Have you tried building without MKLML, and how does the
> performance look?
>
> -tao
>
> -Original Message-
> From: Alex Zai [mailto:aza...@gmail.com]
> Sent: Wednesday, September 19, 2018 4:49 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Remove MKLML as dependency
>
> On our build-from-source page we have a list of recommended BLAS
> libraries:
> https://mxnet.incubator.apache.org/install/build_from_source.html
>
> MKL-DNN
> MKL
> MKLML
> Apple Accelerate
> OpenBlas
>
> MKLML is a subset of MKL (https://github.com/intel/mkl-dnn/issues/102),
> so MKLML users can just use MKL instead. Does anyone see an issue with me
> removing it? It would simplify our doc page and build files.
>
> Alex
>
>
>


reject

2018-09-19 Thread Private LIst Moderation
I'm afraid the downloads page still does not meet requirements.

1. The artifact must link to a mirror site, e.g. the dyn/closer page (not to 
github)

2. The checksum and signature must link directly to the apache.org/dist site 
(not to github or a mirror)

Please update the downloads page and let us know when you've done so.
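Requirement 2 exists so that a verifier checks the mirror-downloaded artifact against the digest published on apache.org/dist, never against a digest served by the mirror itself. A minimal sketch of such a check (the file name and contents below are hypothetical stand-ins):

```python
import hashlib

def verify_sha512(artifact_bytes: bytes, expected_hex: str) -> bool:
    """Compare an artifact's SHA-512 digest against the published one.

    In a real release check, `artifact_bytes` is the tarball fetched from a
    mirror, while `expected_hex` is read from the .sha512 file hosted on
    apache.org/dist -- never from the mirror.
    """
    return hashlib.sha512(artifact_bytes).hexdigest() == expected_hex.strip().lower()

# Hypothetical stand-in for a downloaded source tarball:
payload = b"apache-mxnet-src-1.3.0-incubating.tar.gz contents"
good_digest = hashlib.sha512(payload).hexdigest()
print(verify_sha512(payload, good_digest))  # True
print(verify_sha512(payload, "00" * 64))    # False
```

Signature (.asc) verification follows the same principle, with the KEYS file and detached signature also taken from apache.org/dist.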

Regards,

Craig

> Begin forwarded message:
> 
> From: announce-reject-1537392657.65969.akeecpnfdegkalmpn...@apache.org
> Subject: MODERATE for annou...@apache.org
> Date: September 19, 2018 at 2:30:57 PM PDT
> To: Recipient list not shown: ;
> Cc: 
> announce-allow-tc.1537392657.afleppkblokjjcklocac-zhasheng=apache@apache.org
> Reply-To: announce-accept-1537392657.65969.akeecpnfdegkalmpn...@apache.org
> 
> 
> To approve:
>   announce-accept-1537392657.65969.akeecpnfdegkalmpn...@apache.org
> To reject:
>   announce-reject-1537392657.65969.akeecpnfdegkalmpn...@apache.org
> To give a reason to reject:
> %%% Start comment
> %%% End comment
> 
> 
> From: Sheng Zha 
> Subject: [ANNOUNCE] Apache MXNet (incubating) 1.3.0 Release
> Date: September 19, 2018 at 2:31:18 PM PDT
> To: annou...@apache.org
> 
> 
> Hello all,
> 
> The Apache MXNet (incubating) Community announces the availability of
> Apache MXNet (incubating) 1.3.0!
> 
> Release blog post:
> https://blogs.apache.org/mxnet/entry/announcing-apache-mxnet-incubating-1
>  
> https://medium.com/apache-mxnet/announcing-apache-mxnet-1-3-0-484ea78c22ad 
> 
> 
> Apache MXNet (incubating) is a deep learning framework designed for
> both efficiency and flexibility. It allows you to mix symbolic and
> imperative programming to maximize efficiency and productivity.
> 
> This release improves usability, performance, and interoperability.
> 
> A full list of the changes in this release can be found in the release
> notes:
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.3.0+Release+Notes
>  
> 
> 
> A Link to the Download is here:
> http://mxnet.incubator.apache.org/install/download.html 
> 
> 
> If you prefer to build from source and experiment with various
> compile-time configuration options, use this link to get the
> instructions:
> http://mxnet.incubator.apache.org/install/index.html 
> 
> 
> Or You can download and play with MXNet easily using one of the options
> below:
>1. The Pip packages can be found here: https://pypi.python.org/pypi/mxnet 
> 
>2. The Docker Images can be found here:
> https://hub.docker.com/r/mxnet/python/ 
> 
> 
> Links in Maven to the published Scala packages:
> https://repository.apache.org/content/repositories/releases/org/apache/mxnet/ 
> 
> https://repository.apache.org/#nexus-search;quick~org.apache.mxnet 
> 
> 
> and to the experimental Clojure packages:
> https://repository.apache.org/content/repositories/releases/org/apache/mxnet/contrib/clojure/
>  
> 
> 
> The release tag used for the 1.3.0 release is:
> https://github.com/apache/incubator-mxnet/tree/1.3.0 
> 
> 
> Some more MXNet Resources:
>1. Issues: https://github.com/apache/incubator-mxnet/issues 
> 
>2. Wiki: https://cwiki.apache.org/confluence/display/MXNET 
> 
> 
> 
> If you want to learn more about MXNet visit
> http://mxnet.incubator.apache.org/ 
> 
> Finally, you are welcome to join and also invite your friends to the
> dynamic and growing MXNet community by subscribing to
> dev@mxnet.incubator.apache.org 
> 
> 
> Acknowledgments:
> We would like to thank everyone who contributed to the 1.3.0 release:
> 
> Aaron Markham, Abhinav Sharma, access2rohit, Alex Li, Alexander Alexandrov, 
> Alexander Zai, Amol Lele, Andrew Ayres, Anirudh Acharya, Anirudh Subramanian, 
> Ankit Khedia, Anton Chernov, aplikaplik, Arunkumar V Ramanan, Asmus Hetzel, 
> Aston Zhang, bl0, Ben Kamphaus, brli, Burin Choomnuan, Burness Duan, 
> Caenorst, Cliff Woolley, Carin Meier, cclauss, Carl Tsai, Chance Bair, 
> chinakook, Chudong Tian, ciyong, ctcyang, Da Zheng, Dang Trung Kien, Deokjae 
> Lee, Dick Carter, Didier A., Eric Junyuan Xie, Faldict, Felix Hieber, 
> Francisco Facioni, Frank Liu, Gnanesh, Hagay Lupesko, Haibin 

Podling Report Reminder - October 2018

2018-09-19 Thread jmclean
Dear podling,

This email was sent by an automated system on behalf of the Apache
Incubator PMC. It is an initial reminder to give you plenty of time to
prepare your quarterly board report.

The board meeting is scheduled for Wed, 17 October 2018, 10:30 am PDT.
The report for your podling will form a part of the Incubator PMC
report. The Incubator PMC requires your report to be submitted 2 weeks
before the board meeting, to allow sufficient time for review and
submission (Wed, October 03).

Please submit your report with sufficient time to allow the Incubator
PMC, and subsequently board members to review and digest. Again, the
very latest you should submit your report is 2 weeks prior to the board
meeting.

Candidate names should not be made public before people are actually
elected, so please do not include the names of potential committers or
PPMC members in your report.

Thanks,

The Apache Incubator PMC

Submitting your Report

--

Your report should contain the following:

*   Your project name
*   A brief description of your project, which assumes no knowledge of
the project or necessarily of its field
*   A list of the three most important issues to address in the move
towards graduation.
*   Any issues that the Incubator PMC or ASF Board might wish/need to be
aware of
*   How has the community developed since the last report?
*   How has the project developed since the last report?
*   How does the podling rate its own maturity?

This should be appended to the Incubator Wiki page at:

https://wiki.apache.org/incubator/October2018

Note: This is manually populated. You may need to wait a little before
this page is created from a template.

Mentors
---

Mentors should review reports for their project(s) and sign them off on
the Incubator wiki page. Signing off reports shows that you are
following the project - projects that are not signed may raise alarms
for the Incubator PMC.

Incubator PMC


Re: Some feedback from MXNet Zhihu topic

2018-09-19 Thread Aaron Markham
Thanks for this translation and feedback Qing!
I've addressed point 3 of the documentation feedback with this PR:
https://github.com/apache/incubator-mxnet/pull/12604
I'm not sure how to act on the first two points without some explicit URLs
and examples, so if anyone has those I'd be happy to take a look, whether
it's a rendering glitch or missing or wrong docs.

Also, I would agree that there should be some more simple examples.
Oftentimes the examples are too complicated and unclear about what is
important. They seem targeted at deep learning practitioners, not
"newbies".

And on a related note, I'd really like to pull the Gluon stuff into the API
section. It's confusing as its own navigation item and orphaned
information. It could have a navigation entry at the top of the API list
like "Python: Gluon" or just "Gluon" then list "Python: Module" or just
"Python". Or running this the other way, the Gluon menu could have API and
Tutorials and be more fleshed out, though this is not my preference. Either
way, it needs some attention.

Cheers,
Aaron

On Wed, Sep 19, 2018 at 11:04 AM Qing Lan  wrote:

> Hi all,
>
> There was a trending topic on
> Zhihu (a well-known Chinese site similar to Stack Overflow plus Quora)
> recently, asking about the status of MXNet in 2018. Mu replied to the
> thread and received more than 300 `like`s.
> However, a few concerns were raised in the comments of that thread;
> I have done a rough translation from Chinese to English:
>
> 1. Documentation! Even now, the online docs still contain:
> 1. Deprecated but never-updated docs
> 2. Wrong documentation with poor descriptions
> 3. Alpha-stage docs, e.g. you must install with `pip
> --pre` in order to run.
>
> 2. Examples! For Gluon specifically, many examples still mix
> Gluon and legacy MXNet APIs. The mixture of mx.sym, mx.nd and mx.gluon
> confuses users about which one to choose to get their model to work. As an
> example, although Gluon made data encapsulation possible, there are still
> examples using mx.io.ImageRecordIter with tens of parameters (it feels like
> the Gluon examples are simply copies of the old Python examples).
>
> 3. Examples again! Compared to PyTorch, there are a few things I don't
> like about the Gluon examples:
> 1. Some run, but the code structure is still
> very complicated, e.g. example/image-classification/cifar10.py. It reads
> like consecutive code concatenation: in fact, it is just a
> series of layers mixed with model.fit. That makes it very hard for users
> to modify or extend the model.
> 2. Some only run with certain settings. If users
> change the model a little, it crashes. For example, in
> the multi-GPU example on the Gluon website, MXNet hides the logic that
> rescales the learning rate by the batch size inside the optimizer. Many
> newcomers don't know this and only find that the model stops converging
> when the batch size changes.
> 3. The worst case is that the model itself simply
> doesn't work. Maintainers in the MXNet community merged the code directly
> without running the model (not even an integration test), and the script
> stays broken until somebody raises an issue and fixes it.
>
> 4. The community problem. The core advantage of MXNet is its scalability
> and efficiency. However, the documentation for some tools is confusing.
> Here are two examples:
>
> 1. im2rec comes in two versions, C++ (binary) and Python,
> but nobody would guess that the argparse interfaces of these tools differ
> (and there are no suitable examples to compare against, so users can only
> guess at the usage).
>
> 2. How do we combine MXNet's distributed platform with
> supercomputing tools such as Slurm? How do we profile and debug?
> A couple of companies I know considered using MXNet for distributed
> training; due to the lack of examples and poor support from the community,
> they had to move their models to TensorFlow and Horovod.
>
> 5. The heavy code base. Most of the MXNet examples/source
> code/documentation/language bindings are in a single repo, so a git clone
> costs tens of MB. New-feature PRs take longer than expected, and
> the poor review responsiveness and rules keep new contributors away
> from the community. I remember there was a call for
> documentation improvements last year; it took one user three months
> to get the change merged into master, which is almost a whole release
> interval of PyTorch.
>
> 6. To developers. Very few people in the community discuss the
> improvements we could make to make MXNet more user-friendly. It is far too
> easy to trigger stack traces tens of frames deep while coding. Again, is it
> a requirement that MXNet users be familiar with C++? The bridge between
> Python and C lacks IDE lint support (maybe MXNet assumes every 

[ANNOUNCE] Apache MXNet (incubating) 1.3.0 Release

2018-09-19 Thread Sheng Zha
Hello all,

The Apache MXNet (incubating) Community announces the availability of
Apache MXNet (incubating) 1.3.0!

Release blog post:
https://blogs.apache.org/mxnet/entry/announcing-apache-mxnet-incubating-1
https://medium.com/apache-mxnet/announcing-apache-mxnet-1-3-0-484ea78c22ad

Apache MXNet (incubating) is a deep learning framework designed for
both efficiency and flexibility. It allows you to mix symbolic and
imperative programming to maximize efficiency and productivity.

This release improves usability, performance, and interoperability.

A full list of the changes in this release can be found in the release
notes:
https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.3.0+Release+Notes

A link to the download is here:
http://mxnet.incubator.apache.org/install/download.html

If you prefer to build from source and experiment with various
compile-time configuration options, use this link to get the
instructions:
http://mxnet.incubator.apache.org/install/index.html

Or you can download and play with MXNet easily using one of the options
below:
   1. The Pip packages can be found here: https://pypi.python.org/pypi/mxnet
   2. The Docker Images can be found here:
https://hub.docker.com/r/mxnet/python/

Links in Maven to the published Scala packages:
https://repository.apache.org/content/repositories/releases/org/apache/mxnet/
https://repository.apache.org/#nexus-search;quick~org.apache.mxnet

and to the experimental Clojure packages:
https://repository.apache.org/content/repositories/releases/org/apache/mxnet/contrib/clojure/

The release tag used for the 1.3.0 release is:
https://github.com/apache/incubator-mxnet/tree/1.3.0

Some more MXNet Resources:
   1. Issues: https://github.com/apache/incubator-mxnet/issues
   2. Wiki: https://cwiki.apache.org/confluence/display/MXNET


If you want to learn more about MXNet visit
http://mxnet.incubator.apache.org/

Finally, you are welcome to join and also invite your friends to the
dynamic and growing MXNet community by subscribing to
dev@mxnet.incubator.apache.org


Acknowledgments:
We would like to thank everyone who contributed to the 1.3.0 release:

Aaron Markham, Abhinav Sharma, access2rohit, Alex Li, Alexander Alexandrov,
Alexander Zai, Amol Lele, Andrew Ayres, Anirudh Acharya, Anirudh
Subramanian, Ankit Khedia, Anton Chernov, aplikaplik, Arunkumar V Ramanan,
Asmus Hetzel, Aston Zhang, bl0, Ben Kamphaus, brli, Burin Choomnuan,
Burness Duan, Caenorst, Cliff Woolley, Carin Meier, cclauss, Carl Tsai,
Chance Bair, chinakook, Chudong Tian, ciyong, ctcyang, Da Zheng, Dang Trung
Kien, Deokjae Lee, Dick Carter, Didier A., Eric Junyuan Xie, Faldict, Felix
Hieber, Francisco Facioni, Frank Liu, Gnanesh, Hagay Lupesko, Haibin Lin,
Hang Zhang, Hao Jin, Hao Li, Haozhi Qi, hasanmua, Hu Shiwen, Huilin Qu,
Indhu Bharathi, Istvan Fehervari, JackieWu, Jake Lee, James MacGlashan,
jeremiedb, Jerry Zhang, Jian Guo, Jin Huang, jimdunn, Jingbei Li, Jun Wu,
Kalyanee Chendke, Kellen Sunderland, Kovas Boguta, kpmurali, Kurman
Karabukaev, Lai Wei, Leonard Lausen, luobao-intel, Junru Shao, Lianmin
Zheng, Lin Yuan, lufenamazon, Marco de Abreu, Marek Kolodziej, Manu Seth,
Matthew Brookhart, Milan Desai, Mingkun Huang, miteshyh, Mu Li, Nan Zhu,
Naveen Swamy, Nehal J Wani, PatricZhao, Paul Stadig, Pedro Larroy,
perdasilva, Philip Hyunsu Cho, Pishen Tsai, Piyush Ghai, Pracheer Gupta,
Przemyslaw Tredak, Qiang Kou, Qing Lan, qiuhan, Rahul Huilgol, Rakesh
Vasudevan, Ray Zhang, Robert Stone, Roshani Nagmote, Sam Skalicky, Sandeep
Krishnamurthy, Sebastian Bodenstein, Sergey Kolychev, Sergey Sokolov, Sheng
Zha, Shen Zhu, Sheng-Ying, Shuai Zheng, slitsey, Simon, Sina Afrooze, Soji
Adeshina, solin319, Soonhwan-Kwon, starimpact, Steffen Rochel, Taliesin
Beynon, Tao Lv, Thom Lane, Thomas Delteil, Tianqi Chen, Todd Sundsted, Tong
He, Vandana Kannan, vdantu, Vishaal Kapoor, wangzhe, xcgoner, Wei Wu,
Wen-Yang Chu, Xingjian Shi, Xinyu Chen, yifeim, Yizhi Liu, YouRancestor,
Yuelin Zhang, Yu-Xiang Wang, Yuan Tang, Yuntao Chen, Zach Kimberg, Zhennan
Qin, Zhi Zhang, zhiyuan-huang, Ziyue Huang, Ziyi Mu, Zhuo Zhang.

… and thanks to all of the Apache MXNet community supporters, spreading
knowledge and helping to grow the community!


Thanks!
Apache MXNet (incubating) Team
___

DISCLAIMER:
Apache MXNet (incubating) is an effort undergoing incubation at The
Apache Software Foundation (ASF), sponsored by the Apache
Incubator PMC. Incubation is required of all newly accepted
projects until a further review indicates that the
infrastructure, communications, and decision-making process have
stabilized in a manner consistent with other successful ASF
projects. While incubation status is not necessarily a reflection
of the completeness or stability of the code, it does indicate
that the project has yet to be fully endorsed by the ASF.


Re: [ANNOUNCE] Apache MXNet (incubating) 1.3.0 Release

2018-09-19 Thread Sheng Zha
Thanks, Sergio. Yes, I'm on it. It was due to the download link not
conforming to the requirement. I will fix and resend.

-sz

On Wed, Sep 19, 2018 at 12:07 PM Sergio Fernández  wrote:

> Zha, you should check you have permissions to post to annou...@apache.org,
> because I don't think you announcement made it through:
> https://lists.apache.org/list.html?annou...@apache.org:lte=1M:mxnet
>
> [image: Screen Shot 2018-09-19 at 12.05.14 PM.png]
>
> On Mon, Sep 17, 2018 at 3:51 PM Sheng Zha  wrote:
>
>> Hello all,
>>
>> The Apache MXNet (incubating) Community announces the availability of
>> Apache MXNet (incubating) 1.3.0!
>>
>> Release blog post:
>> https://blogs.apache.org/mxnet/entry/announcing-apache-mxnet-incubating-1
>> https://medium.com/apache-mxnet/announcing-apache-mxnet-1-3-0-484ea78c22ad
>>
>> Apache MXNet (incubating) is a deep learning framework designed for
>> both efficiency and flexibility. It allows you to mix symbolic and
>> imperative programming to maximize efficiency and productivity.
>>
>> This release improves usability, performance, and interoperability.
>>
>> A full list of the changes in this release can be found in the release
>> notes:
>>
>> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.3.0+Release+Notes
>>
>> A Link to the Download is here:
>> https://www.apache.org/dyn/closer.cgi/incubator/mxnet/1.3.0
>>
>> If you prefer to build from source and experiment with various
>> compile-time configuration options, use this link to get the
>> instructions:
>> http://mxnet.incubator.apache.org/install/index.html
>>
>> Or You can download and play with MXNet easily using one of the options
>> below:
>>1. The Pip packages can be found here:
>> https://pypi.python.org/pypi/mxnet
>>2. The Docker Images can be found here:
>> https://hub.docker.com/r/mxnet/python/
>>
>> Links in Maven to the published Scala packages:
>>
>> https://repository.apache.org/content/repositories/releases/org/apache/mxnet/
>> https://repository.apache.org/#nexus-search;quick~org.apache.mxnet
>>
>> and to the experimental Clojure packages:
>>
>> https://repository.apache.org/content/repositories/releases/org/apache/mxnet/contrib/clojure/
>>
>> The release tag used for the 1.3.0 release is:
>> https://github.com/apache/incubator-mxnet/tree/1.3.0
>>
>> Some more MXNet Resources:
>>1. Issues: https://github.com/apache/incubator-mxnet/issues
>>2. Wiki: https://cwiki.apache.org/confluence/display/MXNET
>>
>>
>> If you want to learn more about MXNet visit
>> http://mxnet.incubator.apache.org/
>>
>> Finally, you are welcome to join and also invite your friends to the
>> dynamic and growing MXNet community by subscribing to
>> dev@mxnet.incubator.apache.org
>>
>>
>> Acknowledgments:
>> We would like to thank everyone who contributed to the 1.3.0 release:
>>
>> Aaron Markham, Abhinav Sharma, access2rohit, Alex Li, Alexander
>> Alexandrov,
>> Alexander Zai, Amol Lele, Andrew Ayres, Anirudh Acharya, Anirudh
>> Subramanian, Ankit Khedia, Anton Chernov, aplikaplik, Arunkumar V Ramanan,
>> Asmus Hetzel, Aston Zhang, bl0, Ben Kamphaus, brli, Burin Choomnuan,
>> Burness Duan, Caenorst, Cliff Woolley, Carin Meier, cclauss, Carl Tsai,
>> Chance Bair, chinakook, Chudong Tian, ciyong, ctcyang, Da Zheng, Dang
>> Trung
>> Kien, Deokjae Lee, Dick Carter, Didier A., Eric Junyuan Xie, Faldict,
>> Felix
>> Hieber, Francisco Facioni, Frank Liu, Gnanesh, Hagay Lupesko, Haibin Lin,
>> Hang Zhang, Hao Jin, Hao Li, Haozhi Qi, hasanmua, Hu Shiwen, Huilin Qu,
>> Indhu Bharathi, Istvan Fehervari, JackieWu, Jake Lee, James MacGlashan,
>> jeremiedb, Jerry Zhang, Jian Guo, Jin Huang, jimdunn, Jingbei Li, Jun Wu,
>> Kalyanee Chendke, Kellen Sunderland, Kovas Boguta, kpmurali, Kurman
>> Karabukaev, Lai Wei, Leonard Lausen, luobao-intel, Junru Shao, Lianmin
>> Zheng, Lin Yuan, lufenamazon, Marco de Abreu, Marek Kolodziej, Manu Seth,
>> Matthew Brookhart, Milan Desai, Mingkun Huang, miteshyh, Mu Li, Nan Zhu,
>> Naveen Swamy, Nehal J Wani, PatricZhao, Paul Stadig, Pedro Larroy,
>> perdasilva, Philip Hyunsu Cho, Pishen Tsai, Piyush Ghai, Pracheer Gupta,
>> Przemyslaw Tredak, Qiang Kou, Qing Lan, qiuhan, Rahul Huilgol, Rakesh
>> Vasudevan, Ray Zhang, Robert Stone, Roshani Nagmote, Sam Skalicky, Sandeep
>> Krishnamurthy, Sebastian Bodenstein, Sergey Kolychev, Sergey Sokolov,
>> Sheng
>> Zha, Shen Zhu, Sheng-Ying, Shuai Zheng, slitsey, Simon, Sina Afrooze, Soji
>> Adeshina, solin319, Soonhwan-Kwon, starimpact, Steffen Rochel, Taliesin
>> Beynon, Tao Lv, Thom Lane, Thomas Delteil, Tianqi Chen, Todd Sundsted,
>> Tong
>> He, Vandana Kannan, vdantu, Vishaal Kapoor, wangzhe, xcgoner, Wei Wu,
>> Wen-Yang Chu, Xingjian Shi, Xinyu Chen, yifeim, Yizhi Liu, YouRancestor,
>> Yuelin Zhang, Yu-Xiang Wang, Yuan Tang, Yuntao Chen, Zach Kimberg, Zhennan
>> Qin, Zhi Zhang, zhiyuan-huang, Ziyue Huang, Ziyi Mu, Zhuo Zhang.
>>
>> … and thanks to all of the Apache MXNet community supporters, spreading
>> knowledge and helping to 

Re: [ANNOUNCE] Apache MXNet (incubating) 1.3.0 Release

2018-09-19 Thread Sergio Fernández
Zha, you should check you have permissions to post to annou...@apache.org,
because I don't think you announcement made it through:
https://lists.apache.org/list.html?annou...@apache.org:lte=1M:mxnet

[image: Screen Shot 2018-09-19 at 12.05.14 PM.png]

On Mon, Sep 17, 2018 at 3:51 PM Sheng Zha  wrote:

> Hello all,
>
> The Apache MXNet (incubating) Community announces the availability of
> Apache MXNet (incubating) 1.3.0!
>
> Release blog post:
> https://blogs.apache.org/mxnet/entry/announcing-apache-mxnet-incubating-1
> https://medium.com/apache-mxnet/announcing-apache-mxnet-1-3-0-484ea78c22ad
>
> Apache MXNet (incubating) is a deep learning framework designed for
> both efficiency and flexibility. It allows you to mix symbolic and
> imperative programming to maximize efficiency and productivity.
>
> This release improves usability, performance, and interoperability.
>
> A full list of the changes in this release can be found in the release
> notes:
>
> https://cwiki.apache.org/confluence/display/MXNET/Apache+MXNet+%28incubating%29+1.3.0+Release+Notes
>
> A Link to the Download is here:
> https://www.apache.org/dyn/closer.cgi/incubator/mxnet/1.3.0
>
> If you prefer to build from source and experiment with various
> compile-time configuration options, use this link to get the
> instructions:
> http://mxnet.incubator.apache.org/install/index.html
>
> Or You can download and play with MXNet easily using one of the options
> below:
>1. The Pip packages can be found here:
> https://pypi.python.org/pypi/mxnet
>2. The Docker Images can be found here:
> https://hub.docker.com/r/mxnet/python/
>
> Links in Maven to the published Scala packages:
>
> https://repository.apache.org/content/repositories/releases/org/apache/mxnet/
> https://repository.apache.org/#nexus-search;quick~org.apache.mxnet
>
> and to the experimental Clojure packages:
>
> https://repository.apache.org/content/repositories/releases/org/apache/mxnet/contrib/clojure/
>
> The release tag used for the 1.3.0 release is:
> https://github.com/apache/incubator-mxnet/tree/1.3.0
>
> Some more MXNet Resources:
>1. Issues: https://github.com/apache/incubator-mxnet/issues
>2. Wiki: https://cwiki.apache.org/confluence/display/MXNET
>
>
> If you want to learn more about MXNet visit
> http://mxnet.incubator.apache.org/
>
> Finally, you are welcome to join and also invite your friends to the
> dynamic and growing MXNet community by subscribing to
> dev@mxnet.incubator.apache.org
>
>
> Acknowledgments:
> We would like to thank everyone who contributed to the 1.3.0 release:
>
> Aaron Markham, Abhinav Sharma, access2rohit, Alex Li, Alexander Alexandrov,
> Alexander Zai, Amol Lele, Andrew Ayres, Anirudh Acharya, Anirudh
> Subramanian, Ankit Khedia, Anton Chernov, aplikaplik, Arunkumar V Ramanan,
> Asmus Hetzel, Aston Zhang, bl0, Ben Kamphaus, brli, Burin Choomnuan,
> Burness Duan, Caenorst, Cliff Woolley, Carin Meier, cclauss, Carl Tsai,
> Chance Bair, chinakook, Chudong Tian, ciyong, ctcyang, Da Zheng, Dang Trung
> Kien, Deokjae Lee, Dick Carter, Didier A., Eric Junyuan Xie, Faldict, Felix
> Hieber, Francisco Facioni, Frank Liu, Gnanesh, Hagay Lupesko, Haibin Lin,
> Hang Zhang, Hao Jin, Hao Li, Haozhi Qi, hasanmua, Hu Shiwen, Huilin Qu,
> Indhu Bharathi, Istvan Fehervari, JackieWu, Jake Lee, James MacGlashan,
> jeremiedb, Jerry Zhang, Jian Guo, Jin Huang, jimdunn, Jingbei Li, Jun Wu,
> Kalyanee Chendke, Kellen Sunderland, Kovas Boguta, kpmurali, Kurman
> Karabukaev, Lai Wei, Leonard Lausen, luobao-intel, Junru Shao, Lianmin
> Zheng, Lin Yuan, lufenamazon, Marco de Abreu, Marek Kolodziej, Manu Seth,
> Matthew Brookhart, Milan Desai, Mingkun Huang, miteshyh, Mu Li, Nan Zhu,
> Naveen Swamy, Nehal J Wani, PatricZhao, Paul Stadig, Pedro Larroy,
> perdasilva, Philip Hyunsu Cho, Pishen Tsai, Piyush Ghai, Pracheer Gupta,
> Przemyslaw Tredak, Qiang Kou, Qing Lan, qiuhan, Rahul Huilgol, Rakesh
> Vasudevan, Ray Zhang, Robert Stone, Roshani Nagmote, Sam Skalicky, Sandeep
> Krishnamurthy, Sebastian Bodenstein, Sergey Kolychev, Sergey Sokolov, Sheng
> Zha, Shen Zhu, Sheng-Ying, Shuai Zheng, slitsey, Simon, Sina Afrooze, Soji
> Adeshina, solin319, Soonhwan-Kwon, starimpact, Steffen Rochel, Taliesin
> Beynon, Tao Lv, Thom Lane, Thomas Delteil, Tianqi Chen, Todd Sundsted, Tong
> He, Vandana Kannan, vdantu, Vishaal Kapoor, wangzhe, xcgoner, Wei Wu,
> Wen-Yang Chu, Xingjian Shi, Xinyu Chen, yifeim, Yizhi Liu, YouRancestor,
> Yuelin Zhang, Yu-Xiang Wang, Yuan Tang, Yuntao Chen, Zach Kimberg, Zhennan
> Qin, Zhi Zhang, zhiyuan-huang, Ziyue Huang, Ziyi Mu, Zhuo Zhang.
>
> … and thanks to all of the Apache MXNet community supporters, spreading
> knowledge and helping to grow the community!
>
>
> Thanks!
> Apache MXNet (incubating) Team
> ___
>
> DISCLAIMER:
> Apache MXNet (incubating) is an effort undergoing incubation at The
> Apache Software Foundation (ASF), sponsored by the name of Apache
> Incubator PMC. Incubation is required of all newly accepted
> projects until 

Some feedback from MXNet Zhihu topic

2018-09-19 Thread Qing Lan
Hi all,

There was a trending topic on Zhihu (a well-known Chinese site similar to
Stack Overflow plus Quora) recently, asking about the status of MXNet in
2018. Mu replied to the thread and received more than 300 `like`s.
However, a few concerns were raised in the comments of that thread; I have
done a rough translation from Chinese to English:

1. Documentation! Even now, the online docs still contain:
1. Deprecated but never-updated docs
2. Wrong documentation with poor descriptions
3. Alpha-stage docs, e.g. you must install with `pip --pre`
in order to run.

2. Examples! For Gluon specifically, many examples still mix Gluon and
legacy MXNet APIs. The mixture of mx.sym, mx.nd and mx.gluon confuses users
about which one to choose to get their model to work. As an example,
although Gluon made data encapsulation possible, there are still examples
using mx.io.ImageRecordIter with tens of parameters (it feels like the Gluon
examples are simply copies of the old Python examples).

3. Examples again! Compared to PyTorch, there are a few things I don't like
about the Gluon examples:
1. Some run, but the code structure is still very
complicated, e.g. example/image-classification/cifar10.py. It reads like
consecutive code concatenation: in fact, it is just a series of layers mixed
with model.fit. That makes it very hard for users to modify or extend the
model.
2. Some only run with certain settings. If users change the
model a little, it crashes. For example, in the multi-GPU example on the
Gluon website, MXNet hides the logic that rescales the learning rate by the
batch size inside the optimizer. Many newcomers don't know this and only
find that the model stops converging when the batch size changes.
3. The worst case is that the model itself simply doesn't
work. Maintainers in the MXNet community merged the code directly without
running the model (not even an integration test), and the script stays
broken until somebody raises an issue and fixes it.
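The hidden batch-size/learning-rate coupling described in point 3.2 amounts to the linear scaling rule. A plain-Python sketch of that rule, with made-up base values (real Gluon code would compute this explicitly and pass it to the Trainer/optimizer rather than relying on any hidden rescaling):

```python
def scaled_learning_rate(base_lr, batch_size, base_batch_size=128):
    """Linear scaling rule: the learning rate grows in proportion to the
    (effective) batch size relative to a reference batch size.

    The base values here are illustrative, not MXNet defaults.
    """
    return base_lr * batch_size / base_batch_size

# Moving from one GPU (batch 128) to four GPUs (effective batch 512)
# quadruples the effective batch, so the rate is rescaled to match:
print(scaled_learning_rate(0.1, 128))  # 0.1
print(scaled_learning_rate(0.1, 512))  # 0.4
```

A user who keeps the learning rate fixed while changing the batch size is, in effect, training with a different optimizer configuration, which is why convergence can silently degrade.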

4. The community problem. The core advantage of MXNet is its scalability and
efficiency. However, the documentation for some tools is confusing. Here are
two examples:

1. im2rec comes in two versions, C++ (binary) and Python, but
nobody would guess that the argparse interfaces of these tools differ (and
there are no suitable examples to compare against, so users can only guess
at the usage).

2. How do we combine MXNet's distributed platform with
supercomputing tools such as Slurm? How do we profile and debug? A couple of
companies I know considered using MXNet for distributed training; due to the
lack of examples and poor support from the community, they had to move their
models to TensorFlow and Horovod.

5. The heavy code base. Most of the MXNet examples/source
code/documentation/language bindings are in a single repo, so a git clone
costs tens of MB. New-feature PRs take longer than expected, and the poor
review responsiveness and rules keep new contributors away from the
community. I remember there was a call for documentation improvements last
year; it took one user three months to get the change merged into master,
which is almost a whole release interval of PyTorch.

6. To developers. Very few people in the community discuss what we could do 
to make MXNet more user-friendly. It is far too easy to trigger dozens of 
stack issues while coding. Again, is familiarity with C++ a requirement for 
MXNet users? The connection between Python and C lacks IDE lint support 
(perhaps MXNet assumes every developer is a Vim master). The API and the 
underlying implementation change frequently, so people have to release their 
code pinned to an archived version of MXNet (as TuSimple and MSRA do). Look 
at PyTorch by comparison: even an API for moving a tensor to a device gets a 
thorough discussion.

There will be more comments translated to English and I will keep this thread 
updated…
Thanks,
Qing


Re: Questions regarding C Predict and C++ API provided by MxNet.

2018-09-19 Thread Marco de Abreu
Just to make sure everybody is on the same page about versioning:

The C-API is an internal API that supports the hourglass design model and
does not fall under semantic versioning. This fact allows us to change the
input, output and behaviour of these functions without further notice.
Thus, users should never directly interact with that API.

The Cpp package is an external user-facing API. It's maintained under
semantic versioning and provides safety for our users.
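
As a toy illustration of what that semantic-versioning promise means 
mechanically (a sketch only, not how MXNet actually checks versions):

```python
# Toy sketch of the semantic-versioning compatibility promise: within one
# major version, a newer release keeps an external user-facing API working.

def parse_semver(version):
    """Split 'MAJOR.MINOR.PATCH' into a tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def is_compatible(built_against, running_on):
    """Code built against X.Y.Z keeps working on X.Y'.Z' when Y'.Z' >= Y.Z."""
    b, r = parse_semver(built_against), parse_semver(running_on)
    return b[0] == r[0] and r[1:] >= b[1:]

print(is_compatible("1.2.0", "1.3.1"))  # True: same major, newer minor
print(is_compatible("1.2.0", "2.0.0"))  # False: a major bump may break users
```

The C-API, by contrast, offers no such promise at any version boundary.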

Considering that design, we should make sure to collect the user stories
which directly interface with the C-API and find out why they did that as
opposed to using our public APIs. We should then make sure to include these
cases in the future.

-Marco

On Wed, Sep 19, 2018 at 12:09 PM Anton Chernov  wrote:

> Hi Hagay, hi Amol,
>
> As far as I know, the C API is not for inference only; it is used for
> training as well. In fact, the Python binding, like most other language
> bindings, works through this API.
>
> I'm aware of multiple use cases where both training and inference with
> MXNet should be done in C or C++: plugin development for applications such
> as MatLab, Adobe Photoshop, Maya, Unity etc. A lot of people doing
> simulations for robotics and reinforcement learning would want to integrate
> it on a lowest possible level as well.
>
> Best
> Anton
>
>
> Wed, 19 Sep 2018 at 05:23, Hagay Lupesko :
>
> > Amol,
> >
> > I can try and provide my 2 cents on some of these questions:
> > - "What are the typical use cases in which C++ (cpp-package) or C (C
> > Predict) APIs are used? For example: inference, training or both."
> > Note that the C API supports inference only.
> > From my experience as an Amazon Web Services employee, teams/customers
> > who used the C API used it mainly for inference. Python is much more
> > convenient and suitable for rapid experimentation, which is important
> > for building and training models.
> >
> > - "Currently, users are required to build these APIs from source.
> > Would it be helpful if these APIs are available as standalone packages
> > distributed via package managers (example: apt-get)?"
> > I think it would reduce friction significantly if MXNet offered
> > pre-built binaries. MXNet takes a while to build and to figure out;
> > there are quite a few build flag options, which may be intimidating for
> > users, especially new users.
> > Package managers would be great, but even just binary libraries
> > available at a shared location (e.g. S3) would be super useful.
> >
> > HTH,
> > Hagay
> >
> >
> > On Mon, Sep 17, 2018 at 3:23 PM Amol Lele  wrote:
> >
> > > Hello everybody,
> > >
> > >
> > >
> > > As a contributor to the Apache MXNet project, I would like to ask the
> > > community a couple of questions regarding the C Predict and C++ APIs
> > > that MXNet provides to its users. My main goal is to better understand
> > > the pain points community members currently have with those APIs, as
> > > well as which contributions to the C++ and C Predict APIs would be most
> > > beneficial to users who use or have tried to use them.
> > >
> > > 1.   What are the typical use cases in which the C++ (cpp-package) or
> > > C (C Predict) APIs are used? For example: inference, training or both.
> > >
> > > 2.   Which set of APIs out of C++ and C do users prefer? Preferably
> > > with reasons why.
> > >
> > > 3.   What are the frequently used platforms (Linux, Mac, Windows, etc)
> > > and configurations (such as CPU, GPU, etc) on which these APIs are
> > > used?
> > >
> > > 4.   Currently, users are required to build these APIs from source.
> > > Would it be helpful if these APIs were available as standalone packages
> > > distributed via package managers (example: apt-get)?
> > >
> > > I would highly appreciate your replies to any or all of the above
> > > questions.
> > >
> > >
> > >
> > > Thanks,
> > >
> > > -Amol
> > >
> >
>


***UNCHECKED*** reject

2018-09-19 Thread Private LIst Moderation
%%% Start comment
This announcement has been rejected because it does not conform to release 
announcement requirements.

Specifically, there must be a link in the announcement to the downloads page, 
not to the dyn/closer page.

Downloads must be mirrored from the official Apache distribution site, not 
from github or another site.

Please change the downloads page and resubmit the announcement.

Regards,

Craig

Announcements of Apache project releases must contain a link to the relevant
download page, which might be hosted on an Apache site or a third party site
such as github.com. [1]

The download page must provide public download links where current official
source releases and accompanying cryptographic files may be obtained. [2]

Links to the download artifacts must support downloads from mirrors. Links to
metadata (SHA, ASC) must be from https://www.apache.org/dist//
** MD5 is no longer considered useful and should not be used. SHA is required. **
Links to KEYS must be from https://www.apache.org/dist// not release
specific.

Announcements that contain a link to the dyn/closer page alone will be
rejected by the moderators.

Announcements that contain a link to a web page that does not include a link
to a mirror of the artifact plus links to the signature and at least one SHA
checksum will be rejected.

Announcements that link to dist.apache.org will not be accepted.
Likewise ones which link to SVN or Git code repos.

[1] http://www.apache.org/legal/release-policy.html#release-announcements
[2] https://www.apache.org/dev/release-distribution#download-links

%%% End comment

Craig L Russell
Secretary, Apache Software Foundation
c...@apache.org http://db.apache.org/jdo



Re: [DISCUSS] Build OSX builds in CI (possibly with TravisCI).

2018-09-19 Thread Marco de Abreu
Hey,

as of https://github.com/apache/incubator-mxnet/pull/12550, Python CPU
tests for Mac have been enabled in Travis. The first passing run is
available at
https://travis-ci.org/apache/incubator-mxnet/builds/430566392?utm_source=github_status&utm_medium=notification
.

As stated before, we will keep the status at not-required until we are sure
the system is stable.

Again, thanks to Kellen for his efforts to get Travis up and running!

Best regards,
Marco

On Wed, Sep 19, 2018 at 5:09 AM Hagay Lupesko  wrote:

> Bravo indeed!
> Awesome work Kellen and Marco!
>
> On Tue, Sep 18, 2018 at 7:56 PM Lin Yuan  wrote:
>
> > Bravo! This is a very important piece in CI. Thanks Kellen and Marco to
> > implement it quickly.
> >
> >
> > Lin
> >
> > On Tue, Sep 18, 2018, 4:18 PM Marco de Abreu
> >  wrote:
> >
> > > Kellen has fixed the one bug in our build system and thus, there are no
> > > outstanding tests :)
> > >
> > > Exactly, it will run on branch and PR validation.
> > >
> > > Best regards,
> > > Marco
> > >
> > > sandeep krishnamurthy  wrote on Tue., 18
> > > Sep. 2018, 19:32:
> > >
> > > > This is awesome. Thanks a lot Kellen and Marco. With this work
> > > > complete, we will have MXNet Python tests running for Mac on Travis
> > > > CI, for PR and branch builds?
> > > > Thank you for working on fixing the tests and making them run as
> > > > part of Travis CI for the Mac platform. Is there any GitHub issue or
> > > > Jira where we can see disabled tests that need to be fixed for Mac?
> > > > This might be useful if we want to call for contributions.
> > > >
> > > > Best,
> > > > Sandeep
> > > >
> > > >
> > > > On Tue, Sep 18, 2018 at 9:51 AM Marco de Abreu
> > > >  wrote:
> > > >
> > > > > Hey everyone,
> > > > >
> > > > > we are about to enable Python tests for Mac. The outstanding bugs
> > > > > have been fixed by Kellen and we're just waiting for the PRs to
> > > > > pass. We'll send a separate email as soon as they are enabled.
> > > > >
> > > > > Additionally, we had a small problem where Travis runs got aborted
> > > > > if multiple commits were made in a short timeframe. While this is
> > > > > acceptable for PRs, it caused our branch jobs to fail as well. An
> > > > > example is available at [1]. To cope with this, I have asked Apache
> > > > > Infra to disable cancellation of concurrent jobs. They agreed, but
> > > > > reminded us that they might turn it back on if we consume too many
> > > > > resources.
> > > > >
> > > > > The dashboard to review the Travis resource utilization is
> > > > > available at [2]. Just log in as Guest.
> > > > >
> > > > > Best regards,
> > > > > Marco
> > > > >
> > > > > [1]:
> > > > >
> > > > >
> > > >
> > >
> >
> https://travis-ci.org/apache/incubator-mxnet/builds/430135867?utm_source=github_status&utm_medium=notification
> > > > > [2]:
> > > > >
> > > > >
> > > >
> > >
> >
> https://demo.kibble.apache.org/dashboard.html?page=ci=e0ce4eee89a77ec231eee1fdbbc647cb3de2f6ecfc3cef8d8c11dc2d=hour
> > > > >
> > > > >
> > > > > On Thu, Sep 13, 2018 at 1:06 AM kellen sunderland <
> > > > > kellen.sunderl...@gmail.com> wrote:
> > > > >
> > > > > > We've got fairly limited ability to change what's reported by
> > > > > > Travis. Most administration is done by the ASF Infra crew, so
> > > > > > it's tough for us to experiment with settings. It'd be great if
> > > > > > you could bear with us for a few days. It shouldn't take too long
> > > > > > to either (1) get happy-feeling green checks back, or (2) decide
> > > > > > we don't care as much as we thought we did about MacOS support.
> > > > > >
> > > > > > On Wed, Sep 12, 2018 at 9:53 PM Aaron Markham <
> > > > aaron.s.mark...@gmail.com
> > > > > >
> > > > > > wrote:
> > > > > >
> > > > > > > Is there any way to make it not show a red X failure in the
> > > > > > > GitHub UI when TravisCI fails? I keep going back to check what
> > > > > > > flaky test failed this time, only to realize that Jenkins is
> > > > > > > still running and it was the "not required" Travis failure. The
> > > > > > > green checkmark makes me happy and it's easier to keep an eye
> > > > > > > on what's going on. If Travis times out a lot of the time, then
> > > > > > > most of our PRs will look red/bad/sad when they're not.
> > > > > > >
> > > > > > > What about setting no failure flag but adding a label that
> > > > > > > Travis failed? Or, if we can't control the flag, auto-set
> > > > > > > labels for each Travis and Jenkins pass/fail so we still get
> > > > > > > the benefit of at-a-glance status checks.
> > > > > > >
> > > > > > > On Wed, Sep 12, 2018 at 6:04 AM Marco de Abreu
> > > > > > >  wrote:
> > > > > > >
> > > > > > > > Hello,
> > > > > > > >
> > > > > > > > Travis CI has successfully been enabled just now. This means
> > you
> > > > 

Re: multiple installation guides?

2018-09-19 Thread Aaron Markham
It's not on the site repo. It seems like it is only on the Apache infra. Can
someone request its removal?

On Tue, Sep 18, 2018, 20:34 Hagay Lupesko  wrote:

> The /test site seems to be something old that should have been removed a
> long time ago; it lists versions 0.10 and 0.10.14 :)
> Maybe Aaron has an idea of what needs to be done to remove it...
>
> On Fri, Sep 14, 2018 at 4:55 PM Alex Zai  wrote:
>
> > Why do we have two sets of installation guides?
> >
> > http://mxnet.incubator.apache.org/test/get_started/install.html
> >
> >
> https://mxnet.incubator.apache.org/install/index.html?platform=Linux=Python=CPU
> >
> > The /test domain is also not secure. If this is not supposed to be
> > public, we should remove it, as it is confusing.
> >
>


Re: Remove MKLML as dependency

2018-09-19 Thread Anton Chernov
MKLML is super easy to install since it's distributed with the MKL-DNN
package on GitHub [1], and this holds for all desktop platforms (Linux,
Windows and MacOS). Currently, I don't see a way that MKL could be
automatically installed on a Windows CI host, for example. MKLML also has
the advantage of being smaller than the whole MKL library, which is good for
distribution, while still having enough functionality in it.

I would rather be in favour of keeping it.

The unfortunate fact that MKLML is downloaded for every cmake build will
hopefully be resolved when PR #11148 [2] is merged.

Best regards,
Anton

[1] https://github.com/intel/mkl-dnn/releases
[2] https://github.com/apache/incubator-mxnet/pull/11148


Wed, 19 Sep 2018 at 08:31, Lv, Tao A :

> If you just want to test the performance, I think you need to link MKL for
> BLAS and MKL-DNN for NN. Also, MKL-DNN should link MKL for better
> performance.
>
> Here are some ways for you to install full MKL library if you don't have
> one:
> 1. Register and download from intel website:
> https://software.intel.com/en-us/mkl
> 2. Apt-get/yum: currently this requires configuring Intel’s repositories.
> a.
> https://software.intel.com/en-us/articles/installing-intel-free-libs-and-python-yum-repo
> b.
> https://software.intel.com/en-us/articles/installing-intel-free-libs-and-python-apt-repo
> 3. pip install mkl / mkl-devel: ‘mkl’ package has the runtime and
> ‘mkl-devel’ includes everything with the headers
> a.
> https://software.intel.com/en-us/articles/installing-the-intel-distribution-for-python-and-intel-performance-libraries-with-pip-and
> 4. conda install: also has mkl and mkl-devel
> a. https://anaconda.org/intel/mkl
> b. https://anaconda.org/intel/mkl-devel
>
> If you want to redistribute MKL with MXNet, you may need to take care of the
> license issue. Currently, MKL is using ISSL (
> https://software.intel.com/en-us/license/intel-simplified-software-license
> ).
>
> -Original Message-
> From: Zai, Alexander [mailto:alex...@amazon.com.INVALID]
> Sent: Wednesday, September 19, 2018 12:49 PM
> To: dev@mxnet.incubator.apache.org
> Subject: Re: Remove MKLML as dependency
>
> Will test it out tomorrow.
>
> On the side, what is the best way to test MKL build for MXnet. MKL is
> licensed?
>
> Best,
> Alex
>
> On 9/18/18, 7:50 PM, "Lv, Tao A"  wrote:
>
> Hi Alex,
>
> Thanks for bringing this up.
>
> The original intention of MKLML is to provide a light and
> easy-to-access library for ML/DL community. It's released with MKL-DNN
> under Apache-2.0 license.
>
> AFAIK, MKL-DNN still relies on it for better performance. So I'm
> afraid there will be a performance regression in MKL pip packages if MKLML
> is simply removed.
>
> Have you ever tried the build without MKLML and how does the
> performance look like?
>
> -tao
>
> -Original Message-
> From: Alex Zai [mailto:aza...@gmail.com]
> Sent: Wednesday, September 19, 2018 4:49 AM
> To: dev@mxnet.incubator.apache.org
> Subject: Remove MKLML as dependency
>
> On our build from source page we have a list of blas libraries that
> are recommended:
> https://mxnet.incubator.apache.org/install/build_from_source.html
>
> MKL-DNN
> MKL
> MKLML
> Apple Accelerate
> OpenBlas
>
> MKLML is a subset of MKL (https://github.com/intel/mkl-dnn/issues/102)
and therefore MKLML users can just use MKL instead. Does anyone see an
issue with me removing this? It would simplify our doc page and build file.
>
> Alex
>
>
>


RE: Remove MKLML as dependency

2018-09-19 Thread Lv, Tao A
If you just want to test the performance, I think you need to link MKL for 
BLAS and MKL-DNN for NN. Also, MKL-DNN should link MKL for better performance.

Here are some ways for you to install full MKL library if you don't have one:
1. Register and download from intel website: 
https://software.intel.com/en-us/mkl 
2. Apt-get/yum: currently this requires configuring Intel’s repositories.
a. 
https://software.intel.com/en-us/articles/installing-intel-free-libs-and-python-yum-repo
 
b. 
https://software.intel.com/en-us/articles/installing-intel-free-libs-and-python-apt-repo
 
3. pip install mkl / mkl-devel: ‘mkl’ package has the runtime and ‘mkl-devel’ 
includes everything with the headers
a. 
https://software.intel.com/en-us/articles/installing-the-intel-distribution-for-python-and-intel-performance-libraries-with-pip-and
 
4. conda install: also has mkl and mkl-devel
a. https://anaconda.org/intel/mkl 
b. https://anaconda.org/intel/mkl-devel 

If you want to redistribute MKL with MXNet, you may need to take care of the 
license issue. Currently, MKL is using the ISSL 
(https://software.intel.com/en-us/license/intel-simplified-software-license).
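
For quick reference, options 3 and 4 above boil down to the following 
commands (package and channel names as listed above; exact versions will 
differ over time):

```shell
# pip route: 'mkl' is the runtime only, 'mkl-devel' adds the headers
pip install mkl
pip install mkl-devel

# conda route, from the intel channel
conda install -c intel mkl
conda install -c intel mkl-devel
```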

-Original Message-
From: Zai, Alexander [mailto:alex...@amazon.com.INVALID] 
Sent: Wednesday, September 19, 2018 12:49 PM
To: dev@mxnet.incubator.apache.org
Subject: Re: Remove MKLML as dependency

Will test it out tomorrow. 

As an aside, what is the best way to test the MKL build for MXNet? Is MKL 
licensed?

Best,
Alex

On 9/18/18, 7:50 PM, "Lv, Tao A"  wrote:

Hi Alex,

Thanks for bringing this up.

The original intention of MKLML is to provide a light and easy-to-access 
library for the ML/DL community. It's released with MKL-DNN under the 
Apache-2.0 license.

AFAIK, MKL-DNN still relies on it for better performance. So I'm afraid 
there will be a performance regression in MKL pip packages if MKLML is simply 
removed.

Have you ever tried the build without MKLML, and what does the performance 
look like?

-tao

-Original Message-
From: Alex Zai [mailto:aza...@gmail.com] 
Sent: Wednesday, September 19, 2018 4:49 AM
To: dev@mxnet.incubator.apache.org
Subject: Remove MKLML as dependency

On our build from source page we have a list of blas libraries that are 
recommended:
https://mxnet.incubator.apache.org/install/build_from_source.html

MKL-DNN
MKL
MKLML
Apple Accelerate
OpenBlas

MKLML is a subset of MKL (https://github.com/intel/mkl-dnn/issues/102)
and therefore MKLML users can just use MKL instead. Does anyone see an 
issue with me removing this? It would simplify our doc page and build file.

Alex