Re: designing dh_pytorch for PyTorch reverse dependencies

2023-09-11 Thread M. Zhou
On Mon, 2023-09-11 at 11:15 +0300, Andrius Merkys wrote:
> 
> > 
> > Then it is the dh_pytorch processing logic in pseudo code:
> > (dh_pytorch is inserted before dh_gencontrol)
> > 
> > for each binary package $pkg {
> > 
> >    if $pkg.architecture is all {
> >  # does not require specific variant
> >  append the following to python3:Depends
> >  "python3-torch | python3-torch-api-2.0"
> 
> Why not just python3-torch-api-2.0? AFAIU, python3-torch will provide
> python3-torch-api-2.0 as well. Or is this here to indicate
> preference?

If I remember correctly, the first package in a Depends field
cannot be a virtual package, as per our policy. At least we do
not use the virtual package libblas.so.3 as the first alternative
in any BLAS/LAPACK reverse dependencies.

And using python3-torch as the first alternative implicitly declares
a preference for the "free" variant. Say we have three packages
providing python3-torch-api-2.0:

  python3-torch
  python3-torch-cuda
  python3-torch-rocm

Then which one is preferred to fulfill the virtual api package?

> >    } else {
> >  # for arch-any package
> > 
> >  if $pkg.substvars contains "libtorch*" {
> >    # this variant-specific dependency
> >    do nothing to python3:Depends.
> >    Just use the default dependency.
> >  
> >  } else {
> >    # does not contain "libtorch*" dependency
> >    # that indicates a variant-agnostic package.
> >    append the following to python3:Depends
> >    "python3-torch | python3-torch-api-2.0"
> 
> Am I right that src:pytorch-vision would fall in this category? In
> that 
> case python3-torch would fill the dependency requirement, but will
> the 
> binary package get the needed shared libraries in all cases?

That's a good question. src:pytorch-vision is exactly the kind of
special case I had in mind. It depends on the libtorch2.0 ABI
instead of the python API, so python3-torch-api-2.0 cannot fulfill
its requirement.

The pytorch official website (https://pytorch.org/) always instructs
users to install torch, torchvision, and torchaudio at the same time,
because a CPU-only build of torchvision is not compatible with a CUDA
build of torch. These three packages have to follow the default
dependency resolution from dpkg.

The core principle for deciding between the API and ABI dependency
on pytorch is simple.

  1. as long as the reverse dependency builds some C/C++/... extension
 using libtorch-dev or the pytorch C++ extension utilities, it
 needs ABI compatibility.

  2. as long as the reverse dependency does not contain compiled
 modules linked against libtorch2.0, it is agnostic to the pytorch
 variant, and the api virtual package is sufficient.
 (this holds even if the package contains some cython modules,
  as long as they are not linked against libtorch2.0)

So, that's the reason why we need to analyse the output of
dpkg-shlibdeps. If there is a dependency on libtorch2.0 or
libtorch-cuda-2.0, the package is seen as variant-specific and its
dependency should be kept unmodified.
> 



designing dh_pytorch for PyTorch reverse dependencies

2023-09-09 Thread M. Zhou
Hi folks,

I'm writing down my draft design for dh_pytorch here in order to
gather comments and feedback. If you maintain a package that depends
on python3-torch or related pytorch packages, please pay attention.

We need a custom dh module for pytorch because we have multiple
pytorch variants in our repository, which may or may not satisfy
the requirements of their reverse dependencies. PyTorch is somewhat
similar to numpy in this regard, but we have to implement something
different from dh_numpy.

For instance, a reverse dependency might be agnostic to the actual
variant of pytorch. An example is src:pytorch-ignite. It provides
a higher-level abstraction over the pytorch API and contains no
architecture-specific files. So it can depend on any pytorch variant,
e.g.,
  Depends: python3-torch | python3-torch-cuda | python3-torch-...

A reverse dependency might also depend on a particular variant.
An example is src:pytorch-vision. It compiles binaries
against libtorch2.0 (with B-D: libtorch-dev). In that case,
the cpu variant (python3-torch) will not be able to satisfy the
dependency requirement.

My proposed solution for automatically filling in the pytorch
dependency is as follows:

Firstly, every pytorch variant will provide a virtual package
  python3-torch-api-2.0
to declare that it provides the high-level pytorch API.
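For illustration, each variant could declare the virtual package in
its debian/control roughly like this (a sketch only the Provides line
of which is the point; the rest of the stanza is a hypothetical
example, not the actual packaging):

```
Package: python3-torch-cuda
Architecture: amd64
Provides: python3-torch-api-2.0
Depends: ${python3:Depends}, ${shlibs:Depends}, ${misc:Depends}
Description: PyTorch (CUDA variant)
```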

Here is the dh_pytorch processing logic in pseudocode
(dh_pytorch is inserted before dh_gencontrol):

for each binary package $pkg {

  if $pkg.architecture is all {
    # does not require a specific variant
    append the following to python3:Depends
    "python3-torch | python3-torch-api-2.0"

  } else {
    # for arch-any packages

    if $pkg.substvars contains "libtorch*" {
      # this is a variant-specific dependency:
      # do nothing to python3:Depends,
      # just use the default dependency.

    } else {
      # does not contain a "libtorch*" dependency,
      # which indicates a variant-agnostic package.
      append the following to python3:Depends
      "python3-torch | python3-torch-api-2.0"
    }
  }
}

The $pkg.substvars files are generated by dpkg-shlibdeps.
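The decision logic above can be sketched in Python (an illustration
of the proposal only; the real dh_pytorch is planned as a Perl
debhelper module, and the substvars handling here is simplified to a
plain list of dependency names):

```python
# Sketch of the proposed dh_pytorch decision logic.
API_DEP = "python3-torch | python3-torch-api-2.0"

def pytorch_depends(architecture, shlibs_depends):
    """Return the dependency string to append to python3:Depends,
    or None when the default (variant-specific) dependency from
    dpkg-shlibdeps should be kept unmodified.

    architecture   -- the binary package's Architecture field
    shlibs_depends -- package names from the shlibs:Depends substvar
    """
    if architecture == "all":
        # arch:all packages cannot be variant-specific
        return API_DEP
    if any(dep.startswith("libtorch") for dep in shlibs_depends):
        # linked against libtorch2.0 / libtorch-cuda-2.0:
        # variant-specific, leave the default dependency alone
        return None
    # arch:any but not linked against libtorch: variant-agnostic
    return API_DEP
```

For example, an arch:all package such as pytorch-ignite would get the
api alternative appended, while a package whose substvars mention
libtorch2.0 would be left untouched.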


As a result, src:pytorch-ignite, a variant-agnostic reverse
dependency, will be able to depend on any variant that provides
python3-torch-api-2.0, including python3-torch, python3-torch-cuda,
python3-torch-rocm, etc.

Meanwhile, src:pytorch-vision, a variant-specific reverse
dependency, will simply require python3-torch-cuda.
(I will upload the cuda version of torchvision as
python3-torchvision-cuda in the future)

I don't have much experience in writing custom debhelper modules,
so any comments or suggestions are welcome.
The Perl implementation is a work in progress.

Thanks



Overhaul of the intel-mkl package

2023-07-20 Thread M. Zhou
Hi team,

This is a message on my future plan to overhaul the intel-mkl package.
Some people (bcc'ed) from intel are interested as well,
so I'm sharing the plan to a public list.

The current MKL version in our archive has been outdated for 3 years
https://salsa.debian.org/science-team/intel-mkl
largely because of Intel's OneAPI transition, which significantly
changed the components and the upstream package layout.

These are the next steps to overhaul the package to make it more
maintainable:

1. Greatly simplify the binary package layout.
   We currently split the upstream package into very fine-grained
   packages, but that is not really necessary. For the next update,
   we should trim the binary packages down to only two:

bin:libmkl-rt
bin:libmkl-dev

   All the other packages will be deleted and replaced by the new ones.

2. Unlike Intel's upstream .deb packages, we will make
   the installed files follow the FHS.

3. Unlike Intel's upstream installation, we will not install
   the SYCL- and OpenCL-related binary files. SYCL is not yet
   incorporated into Debian.

   So currently we can only make it CPU-only.

4. Unlike the upstream .deb packages, we integrate the
   libmkl-rt package into our update-alternatives system for
   libblas.so and liblapack.so. I'll keep that in shape
   for the next upload.

That should be it. The current packaging of intel-mkl is just
overcomplicated. The python script that generates the installation
control files should be replaced by a Makefile or bash as well.



Re: Removing ATLAS?

2023-07-17 Thread M. Zhou
On Fri, 2023-07-14 at 01:51 +0200, Sébastien Villemot wrote:
> 
> Your fix looks good. Note that an even better fix is to simply Build-
> Depend on libblas-dev. Linking against an optimized BLAS does not
> really help at build time, because since all variants are ABI
> compatible and use the same SONAME, it’s the runtime dependency that
> really matters.
> 

I agree that an optimized BLAS is not necessary as a build dependency.
I just want to mention that for some computationally intensive
packages, an optimized BLAS may help during dh_auto_test; otherwise
the tests can take forever to run.



Re: Removing ATLAS?

2023-07-10 Thread M. Zhou
I agree. ATLAS is more suitable for source-based distros
like Gentoo. Plus, according to my past benchmarks, ATLAS, even when
compiled locally with -march=native flags, still falls behind OpenBLAS
and BLIS in terms of performance.

Both OpenBLAS and BLIS are still healthy and actively maintained.
So I agree it is time to let old libraries fade away.

BTW, deprecating ATLAS can also help us remove libcblas.so,
as well as fix its reverse dependencies to use the correct libblas.so.



On Sat, 2023-07-08 at 10:01 +0200, Sébastien Villemot wrote:
> Hi,
> 
> As the maintainer of the atlas package over the last decade, I now
> wonder whether we should remove it from the archive.
> 
> As a reminder, ATLAS is an optimized BLAS implementation, that fits
> into our BLAS/LAPACK alternatives framework.¹ Its strategy for
> achieving good performance is to adjust various internal array sizes
> (at build time) so that they fit in the processor cache. It was
> probably the first optimized BLAS added to Debian (in 1999).
> 
> Today, the project looks dead. The last stable release (3.10.3)
> happened in 2016. The last development release (3.11.41, not packaged)
> was in 2018. The git repository has seen no commit since 2019.²
> 
> Moreover, there are better alternatives. Most people today use
> OpenBLAS. There is also BLIS, which can in particular be used on
> architectures not supported by OpenBLAS.
> 
> Also note that ATLAS has never been really well-suited to our
> distribution model. To get the most of ATLAS, you have to recompile it
> locally using the specific CPU that you want to target; a generic
> binary package like the one we distribute is a suboptimal solution,
> since it is not adapted to the local CPU cache size.
> 
> So, given all that, I’m inclined to (try to) remove atlas during the
> trixie development cycle.
> 
> There are quite a few package which (build-)depend on atlas, I attach a
> list. But my guess is that these should be easily fixable, because most
> (if not all) do not require ATLAS specifically. One should normally not
> need to build-depend on atlas, since all our BLAS implementations are
> ABI-compatible (build-depending on libblas-dev should give an
> equivalent binary, unless one is doing static linking). For the
> dependencies of binary packages, I guess those were added to ensure
> that the user has an optimized BLAS installed; so they can probably be
> replaced by something like libopenblas0 | libblis4 (keeping in mind
> that since BLAS/LAPACK implementations are managed by the alternatives
> system, a dependency relationship cannot enforce the implementation
> used at runtime on the user machine).
> 
> Any thought on this?
> 
> Cheers,
> 
> ¹ https://wiki.debian.org/DebianScience/LinearAlgebraLibraries
> ² https://github.com/math-atlas/math-atlas/
> 



Re: How to build compatible packages that use Eigen?

2023-05-04 Thread M. Zhou
I did something similar. Here is my example:

The linker script is here:
https://salsa.debian.org/science-team/blis/-/blob/master/debian/version_script.lds

This script is used like this:
https://salsa.debian.org/science-team/blis/-/blob/master/debian/patches/libblas-provider.patch

This patch aims to hide the internal symbols from external callers.
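A version script of that general shape looks like the following
minimal sketch (the glob patterns are placeholders, not the actual
BLIS export list; see the linked version_script.lds for the real
one):

```
/* minimal --version-script sketch: export only the BLAS entry
   points and hide everything else */
{
  global:
    cblas_*;   /* C BLAS interface */
    *_;        /* Fortran symbols such as dgemm_ */
  local:
    *;         /* all remaining symbols become local */
};
```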


On Wed, 2023-05-03 at 22:07 -0400, Aaron M. Ucko wrote:
> Dima Kogan  writes:
> 
> > Thanks for replying
> 
> No problem.
> 
> > Sorry, I'm not familiar-enough with linker scripts. I would pass this to
> > the linker when building libg2o.so? Or the end-user would need to use
> > this when build-time linking their application? The run-time dynamic
> > linker doesn't need this, right?
> 
> AIUI, you'd supply it when building libg2o.so itself, via the
> --version-script linker option (or as an implict linker script, but I'd
> favor being more explicit here).
> 




Re: How to build compatible packages that use Eigen?

2023-05-04 Thread M. Zhou
I don't know how to address this issue directly, but I have some
relevant comments.

The Eigen library does not support runtime dispatch between CPU ISAs.
When a package is built for the amd64 baseline ISA but run on a modern
CPU, the performance can be very poor.
This is why I build the tensorflow and pytorch packages against
libblas.so.3 (through update-alternatives).
A good BLAS implementation, for example OpenBLAS, is usually faster
than Eigen compiled for the native ISA.

Check if the library in question supports building against BLAS/LAPACK
instead of Eigen. Good luck if the upstream does not support that.

By the way, since there is no runtime ISA dispatch, the "-mavx" flag
is likely a baseline violation with RC severity.

I don't know whether Eigen has implemented runtime dispatch in its
latest version, but the current state of the library still seems to
dispatch via the preprocessor.

On Wed, 2023-05-03 at 14:18 -0700, Dima Kogan wrote:
> Hi. I'm packaging something that uses Eigen, and I'm running into a
> persistent compatibility issue I don't currently know how to solve. Help
> appreciated.
> 
> Here's the problem. Eigen is a C++ header-only library that's heavy into
> templating. So all the functions inside Eigen produce weak symbols, and
> usually the linker will see many identical copies of the same weak
> symbol, from each compile unit and shared object being linked. The
> linker picks ONE of the weak definitions. This is the intended behavior
> in C++ because every copy is supposed to be identical. But in Eigen
> they're not identical: it does different things based on preprocessor
> defines, and you get crashes.
> 
> Here's a simplified illustration of what happens.
> 
> 
> eigen3/Eigen/src/Core/util/Memory.h contains:
> 
>   EIGEN_DEVICE_FUNC inline void* aligned_malloc(std::size_t size)
>   {
> check_that_malloc_is_allowed();
> 
> void *result;
> #if (EIGEN_DEFAULT_ALIGN_BYTES==0) || EIGEN_MALLOC_ALREADY_ALIGNED
> 
>   EIGEN_USING_STD(malloc)
>   result = malloc(size);
> 
>   #if EIGEN_DEFAULT_ALIGN_BYTES==16
>   eigen_assert((size<16 || (std::size_t(result)%16)==0) && "System's 
> malloc returned an unaligned pointer. Compile with 
> EIGEN_MALLOC_ALREADY_ALIGNED=0 to fallback to handmade aligned memory 
> allocator.");
>   #endif
> #else
>   result = handmade_aligned_malloc(size);
> #endif
> 
> if(!result && size)
>   throw_std_bad_alloc();
> 
> return result;
>   }
> 
>   EIGEN_DEVICE_FUNC inline void aligned_free(void *ptr)
>   {
> #if (EIGEN_DEFAULT_ALIGN_BYTES==0) || EIGEN_MALLOC_ALREADY_ALIGNED
> 
>   EIGEN_USING_STD(free)
>   free(ptr);
> 
> #else
>   handmade_aligned_free(ptr);
> #endif
>   }
> 
> The EIGEN_DEFAULT_ALIGN_BYTES and EIGEN_MALLOC_ALREADY_ALIGNED macros
> can vary based on things like __SSE__ and __AVX__ and such.
> 
> Now let's say you're packaging a library. Let's call it libg2o. This is
> NOT header-only, and somewhere it does #include  which
> eventually includes Memory.h. The libg2o.so that ends up in the
> "libg2o0" package then gets a weak symbol for "aligned_malloc" and
> "aligned_free" that encodes the compiler flags that were used when
> building libg2o.so.
> 
> So far so good.
> 
> Now let's say you have a user. They're writing a program that uses both
> libg2o and Eigen. They're writing their own application, not intended to
> go into Debian. So they build with -msse -mavx and all the other fun
> stuff. THEIR weak copies of "aligned_malloc" and "aligned_free" are
> different and incompatible with the copies in libg2o. And the
> application is then likely to crash because at least something somewhere
> will be allocated with one copy and deallocated with another.
> 
> This is just terrible design from the eigen and c++ people, but that's
> what we have. Has anybody here run into this? How does one build the
> libg2o package so that users don't crash their application when using
> it? I tried to demand maximum alignment in libg2o, which fixes some
> things but not all. Currently debugging to find a better solution, but I
> suspect somebody has already fought this.
> 
> Thanks
> 




Re: How much do we lose if we remove theano (+keras, deepnano, invesalius)?

2023-01-14 Thread M. Zhou
Currently, I'd say PyTorch and TensorFlow are the two most
popular libraries. And I even worry that Google is trying to
replace TensorFlow in some aspects with something new like JAX.

On Sat, 2023-01-14 at 11:12 +, Rebecca N. Palmer wrote:
> theano has been mostly abandoned upstream since 2018.  (The Aesara fork 
> is not abandoned, but includes interface changes including the import 
> name, so would break reverse dependencies not specifically altered for it.)
> 
> Its reverse dependencies are keras, deepnano and invesalius.
> 
> It is currently broken, probably by numpy 1.24 (#1027215), and the 
> immediately obvious fixes weren't enough 
> (https://salsa.debian.org/science-team/theano/-/pipelines).
> 
> Is this worth spending more effort on fixing, or should we just remove it?
> 



Re: OneTBB migration to testing stalled

2022-09-07 Thread M. Zhou
Control: reassign -1 src:binutils 2.38.90.20220713-2

I believe this issue is a binutils regression rather than a GCC-12
regression. The default linker ends up with a segmentation fault
on ppc64el. However, if I change the default linker from bfd to
gold, the issue is temporarily bypassed in onetbb 2021.5.0-14.

https://salsa.debian.org/science-team/tbb/-/commit/ad1fe7e7021a37b63f8c7a2b4dc0c766828e7758

I have uploaded -14 to experimental and it passed the NEW queue
lightning fast. I shall upload -15 to unstable once it
becomes green on all architectures.

On Sun, 2022-09-04 at 10:59 -0400, M. Zhou wrote:
> Control: affects -1 src:onetbb
> 
> It's due to a regression bug in GCC-12 that caused linker internal
> error on ppc64el:
> https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1017772
> Does not look like something I can look into.
> 
> If you need it soon in testing, please go ahead and downgrade
> compiler
> to gcc-11 for ppc64el only.
> 
> On Sun, 2022-09-04 at 10:50 +0530, Nilesh Patra wrote:
> > Hi Mo,
> > 
> > It seems that the migration of oneTBB to testing is stalled (since
> > 16
> > days) due
> > to FTBFS on ppc64el with some linker errors[1]
> > I am not sure what is up there, could you please take a look?
> > 
> > [1]:
> > https://buildd.debian.org/status/fetch.php?pkg=onetbb&arch=ppc64el&ver=2021.5.0-13&stamp=1662266636&raw=0
> > 
> 
> 




Re: OneTBB migration to testing stalled

2022-09-04 Thread M. Zhou
Control: affects -1 src:onetbb

It's due to a regression bug in GCC-12 that caused linker internal
error on ppc64el:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=1017772
Does not look like something I can look into.

If you need it soon in testing, please go ahead and downgrade compiler
to gcc-11 for ppc64el only.

On Sun, 2022-09-04 at 10:50 +0530, Nilesh Patra wrote:
> Hi Mo,
> 
> It seems that the migration of oneTBB to testing is stalled (since 16
> days) due
> to FTBFS on ppc64el with some linker errors[1]
> I am not sure what is up there, could you please take a look?
> 
> [1]:
> https://buildd.debian.org/status/fetch.php?pkg=onetbb&arch=ppc64el&ver=2021.5.0-13&stamp=1662266636&raw=0
> 




Re: scikit-learn testing migration

2022-07-28 Thread M. Zhou
I have a long-term POWER9 VM (not QEMU) as a testbed.
I'm trying to investigate the issues on the release architectures,
but this package is too slow to build with QEMU (multiple hours).
(abel.debian.org is also extremely slow for scikit-learn)
I've not yet given up, but the build speed means I cannot
address this issue in a timely manner.

On Thu, 2022-07-28 at 10:15 +0200, Andreas Tille wrote:
> Hi Graham,
> 
> Am Thu, Jul 28, 2022 at 09:15:06AM +0200 schrieb Graham Inggs:
> > Hi
> > 
> > On Wed, 27 Jul 2022 at 17:57, M. Zhou  wrote:
> > > The previous segfault on armel becomes Bus Error on armel and armhf.
> > > I can build it on Power9, but it seems that the test fails on power8 (our 
> > > buildd).
> > 
> > In #1003165, one of the arm porters wrote they are happy to look at
> > the bus errors, but the baseline issue should be fixed first.
> 
> ... this was five months ago and silence since then.  We've lost lots of
> packages in testing and I see no progress here.  It seems upstream is not
> actually keen on working on this as well.  Meanwhile they stepped forward
> with new releases and I simply refreshed the issues for the new version
> (which are the same and not solved).
> 
> Currently we have bus errors on arm 32 bit architectures and a baseline
> violation on power.  If there is no solution at the horizon I'd vote for
> excluding these three architectures instead of sit and wait (which is all
> I can personally do in this topic).
>  
> > > I have skimmed over the build logs and one of the main issues is the use 
> > > of
> > > -march flags to enforce a certain baseline [1]:
> > > 
> > > powerpc64le-linux-gnu-gcc: error: unrecognized command-line option 
> > > ‘-march=native’; did you mean ‘-mcpu=native’?
> > 
> > This may be the cause of the test failures on power8.
> 
> Could someone give this a try?  I know I could use a porter box to do
> so but my time is to limited to do it in a sensible time frame.
> 
> Kind regards
> 
>   Andreas. 
> 



Re: scikit-learn testing migration

2022-07-27 Thread M. Zhou
The previous segfault on armel has become a Bus Error on armel and
armhf. I can build it on POWER9, but it seems that the tests fail on
POWER8 (our buildd).

On Wed, 2022-07-27 at 09:56 +0200, Andreas Tille wrote:
> Control: tags -1 unreproducible
> Control: tags -1 moreinfo
> Control: severity -1 important
> 
> Hi,
> 
> BTW, there is another bug in scikit-learn, but I can't reproduce it and
> have set tags accordingly.  Could someone else please give it a try?
> 
> Kind regards
> 
>  Andreas.
> 
> Am Wed, Jul 20, 2022 at 09:23:28PM +0200 schrieb Andreas Tille:
> > Hi Nilesh,
> > 
> > Am Wed, Jul 20, 2022 at 06:21:19PM +0530 schrieb Nilesh Patra:
> > > On 7/20/22 4:50 PM, Andreas Tille wrote:
> > > > Before we stop progress in Debian for many other architectures since we
> > > > cant't solve this on our own or otherwise are burning patience of
> > > > upstream I'd alternatively consider droping armel as supported
> > > > architecture.
> > > 
> > > I am definitely +1 for this, however scikit-learn is a key package and 
> > > dropping
> > > it from armel would mean dropping several revdeps as well.
> > > I am a bit unsure if that is fine or not.
> > 
> > Its not fine at all and I would not be happy about it.  However, the other
> > side of a key package is, that lots of package have testing removal warnings
> > on architectures which are widely used and we have real trouble because of
> > this.
> > 
> > Kind regards
> > 
> >   Andreas.
> > 
> > -- 
> > http://fam-tille.de
> > 
> > 
> 



Re: TBB package update

2022-06-18 Thread M. Zhou
There is a tbb -> onetbb transition guide:
https://oneapi-src.github.io/oneTBB/main/tbb_userguide/Migration_Guide.html

On Sat, 2022-06-18 at 14:42 +0200, Andreas Tille wrote:
> Hi,
> 
> Am Wed, Jun 08, 2022 at 08:46:27AM +0300 schrieb Andrius Merkys:
> > On 2022-06-07 22:11, Nilson Silva wrote:
> > > I would like to know when the team will upload the new version of TBB to
> > > the debian repositories.
> > 
> > As Mo wrote, you may find onetbb/2021.5.0-9 in experimental. Transition
> > experimental -> unstable is in planning now, you may monitor the
> > progress at #1007222.
> 
> Is there some kind of "porting to latest tbb FAQ"?  For instance I
> wonder how to fix issues like in twopaco[1] or the other bugs files
> against several packages.  I personally have no idea how to deal
> about tbb and I'm wondering if there are some "easy cases" I could
> fix with some little doc.
> 
> Kind regards
> 
>  Andreas.
> 
> PS: For sure patches for the open bugs are always welcome and would
> speed up the transition.
> 
> 
> [1] https://salsa.debian.org/med-team/twopaco/-/jobs/2894511
> 



Re: TBB package update

2022-06-07 Thread M. Zhou
$ apt list libtbb-dev -a
Listing... Done
libtbb-dev/experimental 2021.5.0-9 amd64
libtbb-dev/unstable,now 2020.3-2.1 amd64 [installed,automatic]


On Tue, 2022-06-07 at 19:11 +, Nilson Silva wrote:
> 
> Good evening!
> 
> I would like to know when the team will upload the new version of TBB to the 
> debian repositories.
> 
> Salsa version: onetbb (2021.5.0-9)
> 
> Debian: tbb (2020.3-2.1)
> 
> I have a package that depends on this new version.
> 
> I'm waiting
> 



Re: [Pkg-julia-devel] I'm planning to RM src:julia . Can't bear anymore.

2022-05-20 Thread M. Zhou
Hi Norbert,

It's good to see you around, and thanks for the comment!
Let's see how the other people think.

Basically, at this point I think Debian's strict policy
feels like a double-edged sword. It is good as long as we
want to make elegant offline builds and ensure reproducibility.
It becomes extra work when upstream goes in the opposite
direction, while Debian is seemingly the only distribution
that restricts network access during the build.
Best wishes!


On Sat, 2022-05-21 at 09:21 +0900, Norbert Preining wrote:
> Hi Mo,
> 
> > If no one is willing to save it ... I shall file an RM bug
> > against src:julia .
> 
> I completely agree. I have tried the same with Julia 1.7.N several
> times, but the list was getting longer and longer.
> 
> Although I am not involved anymore, as a former contributor, I
> strongly
> support removing julia.
> 
> The binary packages available work without any problems.
> 
> All the best
> 
> Norbert
> 
> --
> PREINING Norbert 
> https://www.preining.info
> Mercari Inc. + IFMGA Guide + TU Wien + TeX
> Live
> GPG: 0x860CDC13   fp: F7D8 A928 26E3 16A1 9FA0 ACF0 6CAC A448 860C
> DC13




I'm planning to RM src:julia . Can't bear anymore.

2022-05-20 Thread M. Zhou
Hi,

Long story short: the Julia Debian package seriously lacks volunteers
to work on it and keep it up to date. Initially I planned
to bring this package up to the latest 1.7.3 version and solve a
bunch of bugs.

Then I discovered that the latest version downloads a million
julia-specific code snapshots. I gave up halfway, at nearly 16
patches named "no-download-xxx", after which all the XXX.jl
standard libraries are no longer downloaded.

Then I figured out that it had also started to download XXX.jll files.
I roughly estimate that there will eventually be more than 50
patches named
  no-download-xxx
and more than 50 embedded julia-specific artifacts.

Julia itself has become a standalone binary distribution
and formed its own ecosystem. We should use upstream
prebuilt binaries instead of trying to build it on our own
and trapping ourselves in the BLAS/LAPACK ILP64 pitfalls.

If no one is willing to save it ... I shall file an RM bug
against src:julia .
Without an update, it build-depends on llvm-9, which has already been
removed from unstable. That means even if we stay at an old
version, it is still a seriously broken package.


In order to avoid wasting time, I suggest everyone think
carefully before dealing with packages that download
a million files from the internet during the build process,
like bazel+tensorflow, julia, etc.

I give up.



Re: Bug#1000336: Upgrading tbb

2022-03-13 Thread M. Zhou
Hi,

Recently I have not been able to test-build libtbb-dev's reverse
dependencies, as my build machine became inaccessible. That blocks my
submission of the transition bug, so I'm stalled at this point.
According to some Arch Linux developers, this transition breaks a lot
of reverse dependencies, since some of the core APIs have changed.
Please expect a relatively negative rebuild result.

Help is welcome.

On Mon, 2022-03-14 at 01:30 +0530, Nilesh Patra wrote:
> Hi Mo,
> 
> On 2/23/22 11:01 AM, M. Zhou wrote:
> > Hello guys. Finally it's all green on our release architectures
> https://buildd.debian.org/status/package.php?p=onetbb&suite=experimental
> > 
> > I shall request the slot for transition once finished the rebuild
> > of its reverse dependencies and filed FTBFS bugs if any.
> 
> Did you get a chance to do this yet?
> As we _really_ need numba at this point.
> 
> Regards,
> Nilesh
> 
> 



Re: Bug#1000336: Upgrading tbb

2022-02-22 Thread M. Zhou
Hello guys. Finally it's all green on our release architectures
https://buildd.debian.org/status/package.php?p=onetbb&suite=experimental

I shall request the transition slot once I have finished rebuilding
its reverse dependencies and filed FTBFS bugs, if any.

On Tue, 2022-02-08 at 17:59 -0500, M. Zhou wrote:
> Hi Diane,
> 
> Thank you. I have added that patch in the git repository.
> 
> On Tue, 2022-02-08 at 13:49 -0800, Diane Trout wrote:
> > Hi,
> > 
> > After Andreas pointed it out I looked through some of the build
> > failures for onetbb and talked to upstream about the i386 failure.
> > https://github.com/oneapi-src/oneTBB/issues/370#issuecomment-1030387116
> > 
> > They have a patch.
> > https://github.com/oneapi-src/oneTBB/commit/542a27fa1cfafaf76772e793549d9f4d288d03a9
> > 
> > I've tested it against a checkout of the 2021.5.0-1 version of onetbb
> > on i386 and it does build with it applied. Once there was a test
> > failure, and once all tests ran successfully
> > 
> > I see that you've made some more progress for the memory alignment
> > bugs
> > so I didn't commit "Detect 32 bit x86 systems while adding -mwaitpkg
> > option" i386 patch but could if you want.
> > 
> > Diane
> > 
> > 
> 



Re: Google Summer of Code, Debian Science

2022-02-22 Thread M. Zhou
On Tue, 2022-02-22 at 12:52 +0100, Drew Parsons wrote:
> 
> Also useful to report their performance with the various BLAS 
> alternatives.

Looking forward to it. I'm glad to provide pointers in terms of
our BLAS and LAPACK packaging.



Re: Bug#1000336: Upgrading tbb

2022-02-08 Thread M. Zhou
Hi Diane,

Thank you. I have added that patch in the git repository.

On Tue, 2022-02-08 at 13:49 -0800, Diane Trout wrote:
> Hi,
> 
> After Andreas pointed it out I looked through some of the build
> failures for onetbb and talked to upstream about the i386 failure.
> https://github.com/oneapi-src/oneTBB/issues/370#issuecomment-1030387116
> 
> They have a patch.
> https://github.com/oneapi-src/oneTBB/commit/542a27fa1cfafaf76772e793549d9f4d288d03a9
> 
> I've tested it against a checkout of the 2021.5.0-1 version of onetbb
> on i386 and it does build with it applied. Once there was a test
> failure, and once all tests ran successfully
> 
> I see that you've made some more progress for the memory alignment
> bugs
> so I didn't commit "Detect 32 bit x86 systems while adding -mwaitpkg
> option" i386 patch but could if you want.
> 
> Diane
> 
> 




Re: onetbb_2021.4.0-1~exp1_amd64.changes REJECTED

2022-02-03 Thread M. Zhou
Hi,

Thanks a lot to Scott for the review! 
@Andreas: Thanks, please go ahead. Ping me if it FTBFS or any patch
needs a rebase.

On Thu, 2022-02-03 at 17:22 +0100, Andreas Tille wrote:
> Hi Scott,
> 
> Am Thu, Feb 03, 2022 at 03:00:08PM + schrieb Scott Kitterman:
> > 
> > Normally for a package already in Debian I wouldn't reject due to
> > copyright/
> > license documentation, but I am making an exception in this case. 
> > I only
> > started to look at this package by doing grep -ir copyright * over
> > the source.
> > I redirected the output of that to a file and made a list of
> > all the
> > copyright notices that are not currently reflected in
> > debian/copyright.  It
> > has 806 lines.
> > 
> > It looks clear to me that either the package has been completely
> > reworked by
> > upstream and the maintainer didn't check or it's been years (looks
> > like five)
> > since anyone looked at debian/copyright.
> > 
> > Please fix and reupload.  This package needs a comprehensive review
> > of
> > copyright and licensing and the documentation of the results in
> > debian/
> > copyright per policy.
> 
> Thanks a lot for your review.  I think I've fixed d/copyright in
> Git[1].
> 
> @Mo: I had trouble with pristine-tar to extract the source tarball of
> version 2021.4.0-1 which you uploaded.  Since there is a new upstream
> version meanwhile I'm currently building 2021.5.0-1 with the
> intention
> to upload it to experimental via NEW.
> 
> Kind regards
> 
>    Andreas.
> 
> [1]
> https://salsa.debian.org/science-team/tbb/-/blob/master/debian/copyright
> 




Re: alphafold Debian packaging ?

2022-01-12 Thread M. Zhou
Even more complicated is the underlying software dependency tree.

alphafold depends on dm-haiku, jax, tensorflow.
dm-haiku depends on jax.
jax depends on XLA from tensorflow.
tensorflow still in NEW.

Long way to go. Mhhh.

What's also complicated is the GPU support. Currently the only
working modern deep learning framework in our archive is pytorch,
which is compiled with CPU support only.

pytorch-cuda requires cudnn. I gave up on cudnn packaging a few
times, and I eventually realized that I dislike working on
nvidia stuff even though I have to use it.

pytorch-rocm is a good way to go. As you can see on debian-ai@,
people are still working hard to get ROCm into Debian.

Intel GPU support is too new to evaluate.

On Wed, 2022-01-12 at 16:54 +0100, Gard Spreemann wrote:
> 
> Andrius Merkys  writes:
> 
> > On 2022-01-12 17:34, Gard Spreemann wrote:
> > > And their code repository is Apache. Or did you find the actual
> > > pretrained models somewhere under CC-BY-NC?
> > 
> > Interesting. Maybe I am looking at some other source. Am I right to
> > assume we are talking about [3]? If so, it says that the parameters
> > are
> > CC-BY-NC here: [4].
> > 
> > [3] https://github.com/deepmind/alphafold
> > [4] https://github.com/deepmind/alphafold#model-parameters
> 
> Interesting indeed! So we have:
> 
>  – Training data: A plethora of different licenses.
> 
>  – Code: Apache
> 
>  – Trained model: CC-BY-NC-4.0
> 
>  – Output of said trained model: CC-BY-4.0 [5]
> 
> Nightmarish!
> 
> [5] See under "license and attributions" on https://alphafold.com
> 
> 
>  -- Gard
> 
> 




Re: Bug#1000336: Upgrading tbb

2022-01-08 Thread M. Zhou
Hi all,

The good news is that I managed to upgrade onetbb. It 
is in the NEW queue now:
https://ftp-master.debian.org/new/onetbb_2021.4.0-1~exp1.html
All changes have been pushed onto salsa (master branch).

SOVERSION was bumped from 2 to 12 so NEW is inevitable.
There are also some non-trivial API changes. So I guess the
transition won't be easy.

On Wed, 2021-12-29 at 23:27 -0800, Diane Trout wrote:
On Thu, 2021-12-23 at 11:03 -0500, M. Zhou wrote:
> Hi all,
> 
> I'm back.
> 
> I've just finished my final exams so I could do something during
> the holiday. That TBB repository is still work-in-progress and
> FTBFS from the master branch is something expected. I will finalize
> it soon. Andreas said in previous posts that we prefer a faster
> NEW queue process. I understand that but we cannot bypass NEW
> process
> this time as upstream has bumped the SONAME. So I'm changing the
> source name as well following the upstream since NEW is inevitable.
> 
> As for llvmlite, the latest upstream RC release v0.38.0rc1 seems
> to support Python 3.10. Should I upload the RC release?
> 
> BTW, what else should I do? I've been out of sync from the mailing
> list for a long while.


Have you managed to make much progress?

I fiddled with the packaging, got it to build, and tried running the
autopkgtests with 2021.4.0-1.

What'd help me is to have a package I could build locally and test
numba against. If you got it working, could you commit what you have
to a salsa branch and let me know where it is?

Thanks,
Diane





Re: Upgrading tbb

2021-12-23 Thread M. Zhou
Hi all,

I'm back.

I've just finished my final exams so I could do something during
the holiday. That TBB repository is still work-in-progress and
FTBFS from the master branch is something expected. I will finalize
it soon. Andreas said in previous posts that we prefer a faster
NEW queue process. I understand that but we cannot bypass NEW process
this time as upstream has bumped the SONAME. So I'm changing the
source name as well following the upstream since NEW is inevitable.

As for llvmlite, the latest upstream RC release v0.38.0rc1 seems
to support Python 3.10. Should I upload the RC release?

BTW, what else should I do? I've been out of sync from the mailing
list for a long while.

On Thu, 2021-12-23 at 10:58 +0100, Drew Parsons wrote:
> On 2021-12-23 10:24, Drew Parsons wrote:
> > On 2021-12-23 06:57, Andreas Tille wrote:
> > > Hi,
> > > 
> > > Am Wed, Dec 22, 2021 at 05:09:35PM -0800 schrieb Diane Trout:
> > > > On Wed, 2021-12-22 at 22:24 +0530, Nilesh Patra wrote:
> > > > > 
> > > > > Actually because of the current state of numba, several
> > > > > reverse
> > > > > depends are FTBFS, so it's a
> > > > > bit urgent to push. Apologies for getting on your nerves,
> > > > > though.
> > > > 
> > > > I tried, but numba needs tbb version >= 2021. I tried to update
> > > > tbb 
> > > > but
> > > > ran into problems trying to build it.
> > 
> > 
> > Diane is testing a python3.10-compatibility branch for us in numba.
> > 
> > At the same time numba upstream has released 0.55.0rc1 which
> > contains
> > their python3.10 fix.  Should we just jump straight to it (and not
> > wait for the final 0.55 release)?  I don't know how it goes with
> > tbb
> > though.
> 
> Actually I guess 0.55.0rc1 won't help so easily. It needs llvmlite
> 0.38.0rc1, and we've only just got 0.37 packaged. numba is a kind of
> ouroboros; you can never get to the end of it.
> 
> Drew
> 



What should be moved to math-team (Re: Debian Math Team

2021-11-08 Thread M. Zhou
Hi,

At this point I have some doubts about "what should be moved to
the math team." The borderline, and the expected outcome in some
specific cases, have not been discussed.

In my understanding, domain-specific mathematical applications,
such as theorem provers, would be a good fit for the new team,
as they are not likely to interest a wider audience.

However, as we know, mathematics is the underlying core of
many engineering and science fields. Critical mathematical
libraries such as BLAS and LAPACK should receive the attention
of the whole science team, instead of the limited attention of
a small team.

In brief, my personal opinion is: packages that are too important
and generic should be kept in the science team, as they may affect
any sub-area of science; packages that are less likely to be used
in other sub-areas of science can be moved to smaller but dedicated
teams for better care.

The borderline should depend on the influence of a package,
and its expected usage.

On Sun, 2021-11-07 at 11:56 +, Torrance, Douglas wrote:
> 
> Would anyone be able to either grant me owner
> permissions, or alternatively transfer the following from debian-
> science to
> debian-math?
> 
> [...]
> fflas-ffpack

This just reminds me of BLAS and LAPACK.



Re: Debian Math Team

2021-11-08 Thread M. Zhou
Hi Ole,

On Tue, 2021-11-02 at 08:04 +0100, Ole Streicher wrote:
> Instead, I would suggest to keep (and improve) the Science Team
> policy,
> and then to have a *tiny* Math team policy, which could just be a
> 5..10-liner, like
> 
> > We inherit the Science Team policy, except:
> > * The maintainer field should be set to
> >   "Debian Math Team ".
> > * The VCS location is in the Salsa namespace
> >   https://salsa.debian.org/math-team/

I suggest the same. In practice the deep learning team directly
adopted the science team policy, with ML-Policy added on top of
it to address domain-specific issues.
Maybe I should find some time to write down the policy ...
explicitly.



Re: Debian Math Team

2021-11-08 Thread M. Zhou
Hi Anton,

My first impression is the same -- that may increase fragmentation.
But in fact, as long as the main contributors on these packages are
happy, why should we stop them :-)

Debian contributors are already scarce. If a new team helps a group
of maintainers retain their enthusiasm, that will be good.

I think I have done something similar -- I split the deep learning
stuff out of the science team umbrella into a dedicated team.
In this way people who have a specific interest will have a better
place to collaborate. Besides, deep learning team inherited the
science team policy, and ML-Policy handles some new problems.

Overall a new team should be good, as long as maintainer permission
is given to every contributor. Alternatively, maintainer access could
be assigned directly to the whole science team.

On Sat, 2021-10-30 at 01:55 +0200, Anton Gladky wrote:
> Hi Doug,
> 
> well, I think that it just increases a fragmentation. But it is up to
> you.
> 
> Best regards
> 
> Anton
> 
> Am Fr., 29. Okt. 2021 um 22:04 Uhr schrieb Torrance, Douglas
> :
> > 
> > During the Debian Science BoF at this year's DebConf, there was
> > some
> > discussion of creating a team devoted to packaging mathematical
> > software.
> > 
> > This seemed like a pretty good idea, so I figured that I'd go ahead
> > and
> > start working on getting it set up.  I've already created a Salsa
> > group [1]
> > and a team on the Debian Package Tracker [2].  If you're interested
> > in
> > joining, then you should be able to sign up at these links.
> > 
> > I figured next would be applying for a mailing list, putting
> > together a team policy, etc.  Any thoughts?
> > 
> > Doug
> > 
> > [1] https://salsa.debian.org/math-team
> > [2] https://tracker.debian.org/teams/math/
> 



Re: Datasets downloaded by scikit-learn as separate packages?

2021-09-21 Thread M. Zhou
On Mon, 2021-09-20 at 19:52 +0200, Christian Kastner wrote:
> > 
> > Or should we not build these jupyter notebooks for the -doc package?
> 
> I don't think anyone would stop you from packaging the datasets but to
> be honest, I think that would be overkill. The -doc package has a
> popcon
> of 93, and I would assume that (like me) most users of scikit-learn use
> upstream's online documentation directly.

Many machine learning-related packages require external datasets,
and upstream usually provides APIs for users to download them
automatically if they are useful to a large audience.
I vote for "packaging a dataset is not necessary", and we may use a
pytest marker to skip the tests requiring external data.
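Such a marker-based skip can be sketched as follows. This is a hedged sketch only: the `skipif` construction is standard pytest, but the dataset path and the test body are illustrative assumptions, not taken from any particular package.

```python
# Sketch: skip tests that need an external dataset when it is absent.
# DATASET is a hypothetical cache location; a real package would point
# this at wherever upstream's download API stores its data.
import os

import pytest

DATASET = os.path.expanduser("~/.cache/example_dataset")  # hypothetical path

needs_dataset = pytest.mark.skipif(
    not os.path.isdir(DATASET),
    reason="external dataset not available in offline build environments",
)


@needs_dataset
def test_dataset_shape():
    # Would load the dataset and check its dimensions here.
    pass
```

During a package build or autopkgtest run without network access, pytest then reports such tests as skipped instead of failing.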

I refrained from uploading any datasets except for

 $ apt list dataset\*
 Listing... Done
 dataset-fashion-mnist/unstable,unstable,now

as it can be used as a universal sanity-test dataset for any machine
learning tool. (In academia, people use the dataset named MNIST;
the above Fashion-MNIST is an MIT-licensed alternative.)



Quick Poll: Debian to better support hardware acceleration?

2021-05-20 Thread M. Zhou
Hi folks,

---

Q: How far should Debian go in supporting hardware
acceleration solutions like CUDA?

Choice 1: this game belongs to the big companies; we should offload
that burden to third-party providers such as Anaconda.
Choice 2: we may try to provide what the users need.
Choice 3: 

---

As we know, hardware acceleration means a lot to scientific computing,
and I believe a number of Debian users use solutions like CUDA, ROCm,
or even SYCL. The most prevalent solution seems to be CUDA.
Recall that Anaconda might be one of the simplest ways to get the CUDA
versions of tensorflow and pytorch, etc. So I just want to hear your
opinions on how far we should go in this direction.

If we really want to go further, then a GPU server should be made
available in our infrastructure to facilitate development. Licensing
is another considerable blocker, but that can be discussed later.

Thanks!



Re: double handling of BLAS alternatives: blas.so vs blas.so.3

2021-05-06 Thread M. Zhou
On Wed, 2021-05-05 at 14:57 +0200, Drew Parsons wrote:
> 
> 
> The "disaster" is that they can point to different implementations.
> So 
> if you check the symlinks for libblas.so, you think you're using one 
> implementation, while the different libblas.so.3 means  you're
> actually 
> running against another implementation.  So it's not a build disaster
> as 
> such, since they keep ABI compatibility.  But it is a system
> maintenance 
> disaster in the sense that it makes it harder than it needs to be to 
> keep track of which BLAS is actually running on the system.

A sensible way to check which implementation is actually used is to
follow the logic of the dynamic linker (see ld.so(8) and elf(5)).
It loads the libraries recorded in the ELF dynamic section as NEEDED
(as reported by readelf) or DT_NEEDED (ld.so(8), elf(5)).

As specifying `libblas.so` without a SOVERSION would very likely
trigger a Lintian error, I think no BLAS/LAPACK reverse dependency
cares which implementation the libblas.so symlink points at. I think
this applies to all ELF binaries in the Debian archive.

Of course, FFI/dlopen is not the case to discuss here.
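To make "follow the logic of the dynamic linker" concrete, here is a minimal sketch that reads the DT_NEEDED entries straight from the .dynamic section. It is not a replacement for readelf(1): it only handles little-endian 64-bit ELF files and ignores various corner cases.

```python
import struct

SHT_DYNAMIC, DT_NULL, DT_NEEDED = 6, 0, 1


def dt_needed(path):
    """Return the DT_NEEDED entries of a little-endian 64-bit ELF file."""
    with open(path, "rb") as f:
        data = f.read()
    # ELF magic, ELFCLASS64, little-endian; anything else is out of scope.
    if data[:4] != b"\x7fELF" or data[4] != 2 or data[5] != 1:
        raise ValueError("not a little-endian ELF64 file")
    e_shoff, = struct.unpack_from("<Q", data, 40)
    e_shentsize, e_shnum = struct.unpack_from("<HH", data, 58)
    for i in range(e_shnum):
        off = e_shoff + i * e_shentsize
        sh_type, = struct.unpack_from("<I", data, off + 4)
        if sh_type != SHT_DYNAMIC:
            continue
        sh_offset, sh_size = struct.unpack_from("<QQ", data, off + 24)
        sh_link, = struct.unpack_from("<I", data, off + 40)
        # sh_link points at the associated string table (.dynstr).
        str_off, = struct.unpack_from(
            "<Q", data, e_shoff + sh_link * e_shentsize + 24)
        needed = []
        for pos in range(sh_offset, sh_offset + sh_size, 16):
            d_tag, d_val = struct.unpack_from("<QQ", data, pos)
            if d_tag == DT_NULL:
                break
            if d_tag == DT_NEEDED:
                end = data.index(b"\x00", str_off + d_val)
                needed.append(data[str_off + d_val:end].decode())
        return needed
    return []


print(dt_needed("/bin/sh"))  # e.g. ['libc.so.6']
```

A healthy BLAS reverse dependency would show libblas.so.3 here, never a bare libblas.so.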

> > The libblas.so alternative only matters when you’re doing static
> > linking, since that alternative also governs libblas.a. But we
> > don’t
> > use static linking much in Debian. And for someone who is doing
> > static
> > linking, the libblas.so.3 alternative is irrelevant anyways, so
> > there
> > is no risk of confusion.
> 
> OK, if libblas.so controls libblas.a, then it's more meaningful. 
> Though 
> still odd to have static symbols handled by one implementation, and 
> dynamic symbols by another.  Are there any cases where that would be 
> desirable?

There may be room for further improvement. But the first question
that comes to my mind is: how do we manage different header files
(e.g. cblas.h) and static libraries exported by different
implementations without introducing further confusion?

For example, BLAS implementations (except for BLIS and MKL) in our
archive provide libblas.a. When we switch BLAS to BLIS or MKL,
the libblas.a symlink exposed under /usr/lib/<triplet>/ will
be automatically hidden, and the linker won't find it in the
default search path. So users won't accidentally make the mistake
of mixing implementations.

Without the current alternatives mechanism, can we find a better
solution to avoid the situation where the libblas.a from the
reference implementation and the cblas.h header from MKL are
exposed at the same time?



Re: double handling of BLAS alternatives: blas.so vs blas.so.3

2021-05-03 Thread M. Zhou
Hi Drew,

On Sun, 2021-05-02 at 13:50 +0200, Drew Parsons wrote:
> Mo Zhou did the good work of setting up an alternatives framework for
> our BLAS libraries, which is great. We can select between OpenBLAS, 
> BLIS, whatever is best for the local system (even intel-mkl).

Actually that alternative system is NOT my original work -- I just
refined it with BLAS64 alternatives along with some minor enhancements
IIRC. Only Gentoo's initial version of BLAS/LAPACK switching is
my original work.

> But the framework was set up with double handling of the basic 
> libblas.so alternative symlink in each /usr/lib/<triplet>/ directory.
> There's one for the unversioned libblas.so, and a separate one for 
> libblas.so.3.
> 
> This means it's possible for the libblas.so alternative to point at 
> blis-pthread/libblas.so, for instance, while libblas.so.3 points at 
> openblas-pthread/libblas.so.3. It seems to me that's a potential for 
> disaster.

The separation is intentional: it distinguishes build time from run time.

When we build packages based on BLAS/LAPACK, it requires a virtual
build-dependency libblas.so which can be filled by libblas-dev, etc.
During the linking stage, that alternative resolves into, e.g.,
libblas-dev::.../libblas.so -> libblas3::.../libblas.so.3,
and the resulting ELF will have a NEEDED entry (readelf) for libblas.so.3.
If you encounter an ELF binary that NEEDs libblas.so without
specifying the SOVERSION, that binary is exactly a true disaster.
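The run-time half of this can be illustrated with the dlopen(3) semantics exposed by Python's ctypes. Here libm.so.6 stands in for libblas.so.3, since a BLAS provider may not be installed on every system; the point is that the loader is always asked for the versioned SONAME, never the unversioned -dev symlink.

```python
import ctypes

# The dynamic loader is handed the SONAME recorded at link time
# (e.g. "libm.so.6", analogous to "libblas.so.3"); the unversioned
# libm.so symlink is a development-time convenience only and is
# typically not even present without the -dev package.
libm = ctypes.CDLL("libm.so.6")
libm.cos.restype = ctypes.c_double
libm.cos.argtypes = [ctypes.c_double]

assert abs(libm.cos(0.0) - 1.0) < 1e-12
```

Swapping the alternatives-managed libblas.so.3 symlink changes which implementation such a lookup resolves to, without relinking anything.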

> It seems inconsistent with the way the unversioned library is handled
> in 
> "normal" libraries, where it simply points at the latest versioned 
> library.  Shouldn't the blas framework be doing the same thing, with
> a 
> simple local symlink /usr/lib/<triplet>/libblas.so ->
> /usr/lib/<triplet>/libblas.so.3 ?  Then the blas alternatives would only
> need to handle libblas.so.3.

> /usr/lib/<triplet>/libblas.so -> /usr/lib/<triplet>/libblas.so.3 could be
> handled by a libblas-common-dev package that all blas dev packages 
> depend on (since libblas-dev is the reference implementation. 
> Actually 
> maybe libblas-dev should be retooled as the common package, and the 
> reference implementation made explicit as libblas-reference-dev. That
> could help make it more clear that you probably do not want to
> install 
> libblas-reference-dev for actual numerical work).
> 
> What is the motivation for setting up libblas.so as an alternative 
> separate from libblas.so.3 ?  It makes BLAS installation management
> more 
> difficult, I think.

Having been away from Debian for a while, currently I do not fully
recall the reasons ... but actually the libblas.so alternative
is also associated with header files and static libs that differ
across providers.



Re: python-cython-blis package

2021-03-03 Thread M. Zhou
On Thu, 2021-03-04 at 08:09 +0100, Andreas Tille wrote:
> 
> I also intend to negotiate this again.  While the copyright holders
> are
> 
>  2018 The University of Texas at Austin
>  2016 Hewlett Packard Enterprise Development LP
>  2018 Advanced Micro Devices, Inc.
>  2019 ExplosionAI GmbH

Part of the code in cython-blis comes from src:blis
https://github.com/flame/blis
And the copyright holders are largely inherited from src:blis

> the discussion was done with a single developer - well, looking at
> the

That single developer is a core contributor of the spaCy stack,
and a core member of "2019 ExplosionAI GmbH" IIRC



Re: python-cython-blis package

2021-03-03 Thread M. Zhou
For your information,

the upstream holds a very negative attitude towards debian packaging.
https://github.com/explosion/cython-blis/issues/32

CC'ed pabs.

On Wed, 2021-03-03 at 17:51 +0100, Andreas Tille wrote:
> On Wed, Mar 03, 2021 at 05:26:11PM +0100, Gard Spreemann wrote:
> > 
> > Andreas Tille  writes:
> > 
> > > [1] https://salsa.debian.org/debian-science/python-cython-blis
> > 
> > Hi,
> > 
> > I think this is a typo. It should be
> > 
> >  https://salsa.debian.org/science-team/python-cython-blis
> > 
> > right?
> 
> Sure.  Please always watch me closely. ;-)
> 
> Kind regards
> 
>  Andreas.
> 
> 




dwz: libsleef3.debug: .debug_line reference above end of section

2019-01-12 Thread M. Zhou
Hi folks,

I have no idea at all what this error message is supposed to
mean or how it can happen (the results surprised me, since there have
been only trivial changes since the last upload):

| dh_dwz -a
| dwz: debian/libsleef3/usr/lib/debug/.dwz/s390x-linux-gnu/libsleef3.debug: .debug_line reference above end of section
| dh_dwz: dwz -q -mdebian/libsleef3/usr/lib/debug/.dwz/s390x-linux-gnu/libsleef3.debug -M/usr/lib/debug/.dwz/s390x-linux-gnu/libsleef3.debug -- debian/libsleef3/usr/lib/s390x-linux-gnu/libsleef.so.3.3.1 debian/libsleef3/usr/lib/s390x-linux-gnu/libsleefgnuabi.so.3.3 returned exit code 1
| make: *** [debian/rules:5: binary-arch] Error 2

This failure has been found on mips, mipsel, mips64el, s390x.

  https://buildd.debian.org/status/package.php?p=sleef

Does anybody know what such an error message means, and how I can fix it?

Thanks in advance.