Those are compelling points! There's also a more recent follow-up
from the Julia team: https://julialang.org/blog/2018/12/ml-language-compiler

It seems that Julia will likely have its place in ML regardless of how
other tools progress; the latest offerings from Julia/Flux are really
compelling.

Wondering where that leaves MXNet...

Zach Boldyga
Scalabull  |  Founder
1 (866) 846-8771 x 101


On Sat, Feb 9, 2019 at 11:02 PM Iblis Lin <ib...@hs.ntnu.edu.tw> wrote:

> (well, I'm a Julia programmer, so my opinion might be quite biased. :p)
>
> No. I think Python is still dominating at this moment.
> I agree with the Julia blog post about ML and PL
> (it is also mentioned in that Swift article):
>    https://julialang.org/blog/2017/12/ml&pl
>
>    (Chinese version)
>    https://julialang.org/blog/2017/12/ml&pl-cn
>    https://julialang.org/blog/2017/12/ml&pl-zh_tw
>
> TL;DR from my view:
> (Quote from the blog)
> "Any sufficiently complicated machine learning system contains an ad-hoc,
> informally-specified, bug-ridden, slow implementation of half of a
> programming language."
>
> Runtime & Ecosystem
>   Basically, I would say that TensorFlow/MXNet/PyTorch are effectively
>   standalone programming languages for a specific domain -- numerical
>   computation.
>   They use Python as their interface to build models.
>   Where do the models get computed? In their own runtime.
>   This runtime shares nothing with CPython's runtime.
>   The user puts "+-*/" symbols and placeholders in Python,
>   but nothing is computed by CPython.
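>
>   The same split is visible from the Julia side of MXNet. A minimal
>   sketch (assuming the MXNet.jl NDArray API; treat the exact calls as
>   illustrative):
>
>       using MXNet
>
>       a = mx.ones(2, 3)    # an mx.NDArray, owned by libmxnet
>       b = a * 2 + 1        # '+' and '*' are dispatched to the libmxnet
>                            # engine; the host runtime computes nothing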
>
>   So... what's the problem with having your own runtime?
>   In the case of TF/MXNet/PyTorch, it splits off and throws away the
>   original ecosystem.
>   For example, MXNet has its own array type, 'NDArray'.
>     This type only runs on our own runtime (libmxnet).
>     You have to abandon the great work done by scikit-learn and the rest
>     of the SciPy ecosystem, which people have already devoted tons of
>     effort to.
>     You need to re-write a port for NDArray if you want something like a
>     Gaussian process.
>     And this builds a wall between libmxnet and the numpy runtime
>     (see the sketch below).
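>
>   A small sketch of that wall (again via MXNet.jl; the Gaussian-process
>   consumer is hypothetical, the explicit copy is the point):
>
>       using MXNet
>
>       nd = mx.ones(2, 5)   # data held by libmxnet
>       # A native Gaussian-process package only knows plain Arrays,
>       # so the data must be copied across the wall first:
>       x = copy(nd)         # now an ordinary Julia Matrix{Float32}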
>
>   I feel sorry about another example:
>
> https://mxnet.incubator.apache.org/versions/master/api/python/ndarray/linalg.html
>   This API was added about 1 year ago (or half a year ago?).
>   It made me anxious.
>   Tons of numerical systems have more robust and versatile linear algebra
>   functions.
>   But some MXNet developers have to spend their valuable time implementing
>   linalg stuff all over again.
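>
>   For contrast, a sketch of what a native ecosystem already provides
>   (Julia's LinearAlgebra stdlib, wrapping decades of LAPACK work):
>
>       using LinearAlgebra
>
>       A = [4.0 1.0; 1.0 3.0]
>       F = cholesky(A)        # mature, well-tested factorization
>       x = A \ [1.0, 2.0]     # linear solve in one operator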
>
> About Julia's ecosystem
>   (Although selling Julia is not the point.)
>   Let's talk about what the Julia community has done to integrate its
>   ecosystem.
>   There is a package named Flux.jl[1].
>   It fully utilizes Julia's native Array type and runtime.
>   For a CPU run, the code is written in pure Julia, and the performance
>   is quite competitive[2] for code written entirely in a high-level
>   language.
>   So I can do small experiments on my FreeBSD desktop
>   without compiling any C/C++ extensions.
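>
>   For instance, a minimal model on plain Arrays (a sketch against Flux's
>   current API; the layer sizes are made up):
>
>       using Flux
>
>       m = Chain(Dense(10, 5, relu), Dense(5, 2), softmax)
>       x = rand(Float32, 10)  # an ordinary Julia Array, no special type
>       y = m(x)               # forward pass runs entirely in Julia
>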
>   For a GPU run, there is a crazy package, CUDAnative.jl[3], that lets
>   the user write kernel code in pure Julia. It leverages LLVM's PTX
>   backend.
>   This package is backed by the JuliaGPU[4] community.
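>
>   E.g. a hand-written kernel in plain Julia (a sketch assuming
>   CUDAnative.jl plus CuArrays.jl and a CUDA-capable GPU):
>
>       using CUDAnative, CuArrays
>
>       function vadd!(c, a, b)
>           i = (blockIdx().x - 1) * blockDim().x + threadIdx().x
>           i <= length(c) && (@inbounds c[i] = a[i] + b[i])
>           return nothing
>       end
>
>       a = CuArray(rand(Float32, 1024))
>       b = CuArray(rand(Float32, 1024))
>       c = similar(a)
>       @cuda threads=256 blocks=4 vadd!(c, a, b)  # compiled via LLVM PTX
>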
>   As for AD stuff, it's supported by another group of people from
>   JuliaDiff[5], who do research on ODEs/PDEs.
>   Flux integrates them all and becomes part of the ecosystem as well.
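>
>   E.g. taking gradients of ordinary Julia code (a sketch using
>   ForwardDiff.jl, one of the JuliaDiff packages):
>
>       using ForwardDiff
>
>       f(x) = sum(sin, x) + prod(x)
>       g = ForwardDiff.gradient(f, [1.0, 2.0, 3.0])  # plain Vector in,
>                                                     # plain Vector out
>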
>   If the user wants some exotic statistical distributions, just plug in
>   another package from JuliaStats[6].
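>
>   Say (a sketch with Distributions.jl; the distribution choice is
>   arbitrary):
>
>       using Distributions
>
>       d = InverseGaussian(1.0, 3.0)  # ready-made sampling, densities...
>       rand(d, 4)
>       logpdf(d, 0.5)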
>
> > Any plans to take an approach similar to this for the MXNet library?
>
>   TBH, I'm selfish. My answer is Julia. I only care about Julia stuff.
>   I'm trying to re-use more of the interfaces from Julia's stdlib and
>   runtime.
>   It's a challenge. I hope the MXNet Julia package is more than a binding
>   and connects with the ecosystem.
>
> So... you might ask why I'm here working on MXNet?
>   I want to increase the entropy of DL tools in Julia.
>   I think freedom is the symbol of the open-source world;
>   users should always have another choice of software.
>   I personally dislike the state of TF -- being a huge, closed ecosystem.
>   Many people are porting stuff into TF's system and nothing gets fed
>   back (<del> the backprop got truncated :p </del>).
>   I think Julia can find a balance point between MXNet's ecosystem and
>   the original one.
>
>
> [1] https://fluxml.ai/
> [2]
> https://github.com/avik-pal/DeepLearningBenchmarks#cpu-used-----intelr-xeonr-silver-4114-cpu--220ghz
> [3] https://github.com/JuliaGPU/CUDAnative.jl
> [4] https://github.com/JuliaGPU
> [5] https://github.com/JuliaDiff
> [6] https://github.com/JuliaStats
>
> Iblis Lin
> 林峻頤
>
> On 2/10/19 4:08 AM, Zach Boldyga wrote:
> > Any plans to take an approach similar to this for the MXNet library?
> >
> >
> https://github.com/tensorflow/swift/blob/master/docs/WhySwiftForTensorFlow.md
> >
> > -Zach
> >
>
