Hi -devel,

I've just filed an RM bug (#935769) against src:tensorflow, and I believe
this is the most appropriate choice at this stage. For packages that
easily draw attention from the media, not providing them at all is much
better than providing something far inferior to what users expect
(Recall "difficulty ... DL framework" and "conda ...").


A number of packages in our archive have referenced tensorflow:

  https://codesearch.debian.net/search?q=tensorflow&perpkg=1

Some even call its C API (ffmpeg calls tensorflow for its super-
resolution filter, although the ffmpeg package maintainers have disabled
the --enable-libtensorflow configure option). At this point, with good
intentions, some contributors may hope to put in a bit more effort to
save the package, or at least to keep its C/C++ interface available. To
me, avoiding the Bazel build (the only supported build system for
tensorflow) is costly, and the yield isn't worth the cost. The most
practical recommendation to tensorflow users is "pip or conda".


A deep learning (DL) framework is NOT something too complex to be
implemented from scratch by a single person. A fundamental DL framework
can be implemented with the following functionalities:
 1) data loading, e.g. a CSV reader
 2) linear operations, e.g. matrix multiplication, convolution
 3) element-wise non-linear functions, e.g. max(x,0), exp(x), ln(x)
 4) the computation graph (sort of directed acyclic graph)
 5) automatic (or manual) differentiation (computing the gradient)
 6) first-order gradient-based optimization (network training)
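To make items 3) through 6) concrete, here is a minimal sketch of a
scalar reverse-mode autodiff engine driving one gradient-descent loop.
This is hypothetical illustrative code of my own, not related to
tensorflow's actual implementation, and it deliberately omits all of the
engineering (tensors, SIMD, GPUs) discussed below:

```python
# Minimal sketch: a computation graph (DAG) over scalars, reverse-mode
# automatic differentiation, and first-order gradient-based training.
# Purely illustrative; no relation to tensorflow internals.

class Value:
    """A node in the computation graph: a scalar and its gradient."""
    def __init__(self, data, parents=()):
        self.data = data
        self.grad = 0.0
        self._parents = parents
        self._backward = lambda: None

    def __add__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data + other.data, (self, other))
        def backward():
            self.grad += out.grad
            other.grad += out.grad
        out._backward = backward
        return out

    def __mul__(self, other):
        other = other if isinstance(other, Value) else Value(other)
        out = Value(self.data * other.data, (self, other))
        def backward():
            self.grad += other.data * out.grad
            other.grad += self.data * out.grad
        out._backward = backward
        return out

    def relu(self):
        # element-wise non-linearity max(x, 0), item 3) above
        out = Value(max(self.data, 0.0), (self,))
        def backward():
            self.grad += (out.data > 0) * out.grad
        out._backward = backward
        return out

    def backprop(self):
        # topologically sort the DAG, then propagate gradients backwards
        order, seen = [], set()
        def visit(v):
            if v not in seen:
                seen.add(v)
                for p in v._parents:
                    visit(p)
                order.append(v)
        visit(self)
        self.grad = 1.0
        for v in reversed(order):
            v._backward()

# Train y = w*x to fit the single point (x=2, y=6) by gradient descent;
# w converges to 3.
w = Value(0.0)
for _ in range(50):
    w.grad = 0.0
    err = w * 2.0 + Value(-6.0)
    loss = err * err
    loss.backprop()
    w.data -= 0.1 * w.grad
```

Everything a real framework adds on top of this skeleton (batched tensor
operations, data loaders, hardware acceleration) is engineering, which
is exactly where tensorflow's complexity lies.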

That means tensorflow's complexity doesn't come from the theoretical
side, but from the engineering side, especially performance
optimization. On the other hand, it's easy for users to find an
alternative to tensorflow as long as they don't heavily rely on some
specific portion of its functionality.


Based on the following facts, I believe removing src:tensorflow is the
most appropriate choice at the current stage, and I DISCOURAGE any
effort to save or re-introduce it.

 1) TensorFlow's only well-supported build system, i.e. Bazel, has no
    realistic prospect of entering Debian.
 2) Maintaining an alternative build system (cmake, or any self-made
    one) could be costly.
 3) Even if somebody conquered the build system issue at some cost,
    only a low-performance version could be uploaded to our main
    archive (without SIMD acceleration due to our ISA baseline, and
    without CUDA or OpenCL acceleration).
 4) To mitigate the performance issue, one could upload a CUDA version
    to the contrib section, but I can assure you that dealing with
    nvidia's stack once anything goes wrong is a painful experience
    for a free-distro developer.
 5) To mitigate the performance issue with OpenCL, one could also try
    AMD's fully open-source ROCm/HIP software stack (AMD's open-source
    counterpart to nvidia's CUDA). However, the use of AMD graphics
    cards for machine learning is still not common, and none of the
    related software has been packaged yet.

With that said, I still encourage people who care about this topic to
maintain the building-block packages for DL frameworks (I'm maintaining
some of these), or some alternative DL frameworks if you see fit.
