Re: dh_install by file suffix

2023-07-17 Thread Ole Streicher

Hi Niels,

On 16.07.23 09:32, Niels Thykier wrote:
The "iraf" source package needs to divide these files into user 
related files (for the "iraf" and "iraf-noao" packages) and 
development related files (for "iraf-dev" and "iraf-noao-dev"). The 
problem now is that the division is (mainly) by extension:

[...]
Alternatively, you can just make the .install executable in general and 
have it output what you want. That option also works.


I think this is the best solution; what I will do is an executable:

- debian/iraf.install -
#!/bin/sh

cat <
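
The script is truncated here; a minimal sketch of how the complete file
could look (the fixed entries and the suffix list follow the earlier
mails, the exact heredoc content is an assumption -- debhelper simply
runs an executable config file and uses its output):

---8<------------------------------------------------
#!/bin/sh
# debian/iraf.install, mode 0755

# fixed entries
cat <<EOF
etc/iraf/
usr/lib/iraf/bin/ecl.e
EOF

# suffix-selected entries, relative to debian/tmp
cd debian/tmp
find usr/lib/iraf/pkg usr/lib/iraf/unix/hlib \
    -name '*.hlp' -o -name '*.hd' -o -name '*.fits'
---8<------------------------------------------------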

Re: dh_install by file suffix

2023-07-15 Thread Ole Streicher

Hi again,

I think one way could be to put the file list into a variable in
d/rules, and expand the list in the .install file, like:


-- debian/iraf.install -
etc/iraf/
usr/lib/iraf/bin/ecl.e
[... other fixed content]
${env:IRAF_FILES}
8<--

--- debian/rules ---

override_dh_install:
	IRAF_FILES=$$(cd debian/tmp; \
	    find usr/lib/iraf/pkg usr/lib/iraf/unix/hlib \
	        -name \*.hlp \
	        -o -name \*.hd \
	        [...] \
	        -o -name \*.fits) \
	    dh_install

8<--

where the same procedure however would be required for all four binary
packages. This does not look very nice, and also according to the
debhelper manpage, one can only expand to 4096 chars (I'd need ~40,000).


Any better idea?

Best

Ole


On 15.07.23 21:01, Ole Streicher wrote:

Hi,

I am upgrading one of my packages (iraf) to a new version. The new 
version comes with a "make install", which installs everything under 
/usr/lib/iraf/ (and some other places).


The "iraf" source package needs to divide these files into user related 
files (for the "iraf" and "iraf-noao" packages) and development related 
files (for "iraf-dev" and "iraf-noao-dev"). The problem is now, that the 
division is (mainly) by extension:


- *.cl, *.hd, *.men, *.par (... and some other extensions) should go to
   the user packages

- *.a, *.h should go to the development packages

(the "iraf" and "iraf-noao" package differ mainly by that "iraf" 
collects them in the pkg/ subdir, and "iraf-noao" in the noao subdir).


The main question here is: how can I do a dh_install selectively by file 
suffix? Otherwise, I would need to list the (~1000) files in the 
"install" files, which is not very robust.


Cheers

Ole




dh_install by file suffix

2023-07-15 Thread Ole Streicher

Hi,

I am upgrading one of my packages (iraf) to a new version. The new 
version comes with a "make install", which installs everything under 
/usr/lib/iraf/ (and some other places).


The "iraf" source package needs to divide these files into user related 
files (for the "iraf" and "iraf-noao" packages) and development related 
files (for "iraf-dev" and "iraf-noao-dev"). The problem is now, that the 
division is (mainly) by extension:


- *.cl, *.hd, *.men, *.par (... and some other extensions) should go to
  the user packages

- *.a, *.h should go to the development packages

(the "iraf" and "iraf-noao" package differ mainly by that "iraf" 
collects them in the pkg/ subdir, and "iraf-noao" in the noao subdir).


The main question here is: how can I do a dh_install selectively by file 
suffix? Otherwise, I would need to list the (~1000) files in the 
"install" files, which is not very robust.


Cheers

Ole



Re: lintian errors packaging Barry's Emacs

2022-12-27 Thread Ole Streicher
Hi Santiago,

Santiago Vila  writes:
> If you don't have deb-src lines, they are the same as the usual deb lines
> except that they begin with deb-src.

Just curious: why are the deb lines not used by default here?

Best

Ole



Re: lintian errors packaging Barry's Emacs

2022-12-27 Thread Ole Streicher
Hi Barry,

Barry Scott  writes:
> I am building my first Debian package for Barry's Emacs
> (https://barrys-emacs.org)

Aside from Santiago's technical tips: If you really want to contribute
your package to the Debian distribution, you should also have a few
other things in mind:

* Your package should come with a proper DFSG-compliant [1] license. Your
  Github upstream package [2] doesn't have one, and it would be useful
  (not only for Debian packaging) to add one there.

* I would recommend to follow the usual procedures here. Specifically,
  you should open an "Intent To Package" (ITP) bug [3] to indicate your
  packaging efforts.

* The target distribution for the packaging is "unstable" (sid). From
  there it migrates to the Debian distribution. It also migrates to
  Ubuntu, Mint, and all the other derivative distributions.

* The packaging efforts should be separated from the software
  development itself and usually happen on the Salsa GitLab server
  [4]. I'd strongly recommend to allow team maintenance, to lower the
  barrier for packaging-related contributions.

Best regards

Ole


[1] https://www.debian.org/social_contract#guidelines
https://wiki.debian.org/DFSGLicenses
[2] https://github.com/barry-scott/BarrysEmacs
[3] https://wiki.debian.org/ITP
[4] https://salsa.debian.org



Problem with LD_PRELOAD in cowbuilder

2022-11-28 Thread Ole Streicher
Hi,

My cowbuilder setup has been working properly for many years, however
now I have a strange problem with the cowbuilder in one (new) cmake
project:

When building this project, many lines like

ERROR: ld.so: object '.' from LD_PRELOAD cannot be preloaded (cannot
read file data): ignored.

appear, and it is unclear where they come from. These lines disturb the
output in many places that are checked by tests (which therefore fail).

The LD_PRELOAD seems to be set from the build framework (cowbuilder?);
when I do a

%:
	unset LD_PRELOAD; dh $@ --buildsystem=cmake --

in debian/rules, the package builds fine. However, when I do a

%:
	echo $@ "LD_PRELOAD=" '>>>'$(LD_PRELOAD)'<<<'
	dh $@ --buildsystem=cmake --

I can see the following sequence:

clean LD_PRELOAD= >>>libfakeroot-sysv.so<<<
clean LD_PRELOAD= >>><<<
binary LD_PRELOAD= >>><<<
ERROR: ld.so: object '.' from LD_PRELOAD cannot be preloaded (cannot read file 
data): ignored.
[...]

The package itself does not mention LD_PRELOAD at all. So I finally
have no idea where this could happen. I already removed all eatmydata
mentions to make sure it is not the cause here. It seems that there is
a subtle difference between an unset LD_PRELOAD and an empty
LD_PRELOAD, and debhelper seems to make it empty instead of unsetting it.
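
If the empty-but-exported variable really is the culprit, an (untested)
alternative to the shell-level unset would be to keep make itself from
passing the variable on -- a minimal sketch, assuming GNU make
semantics:

---8<--
#!/usr/bin/make -f

# do not pass LD_PRELOAD (not even an empty one) to dh and the tests
unexport LD_PRELOAD

%:
	dh $@ --buildsystem=cmake
---8<--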

Does anyone have an idea? This is not even my own package, but one that
I am mentoring - https://salsa.debian.org/debian-astro-team/elements

Best regards

Ole



Re: Environment variable for package base dir

2022-08-14 Thread Ole Streicher
Gianfranco Costamagna  writes:
> $CURDIR? 

Yes, that is what I also finally found to be working.

Cheers

Ole



Environment variable for package base dir

2022-08-11 Thread Ole Streicher
Hi,

one of my packages

https://salsa.debian.org/debian-astro-team/sep

requires specifying a path relative to the package base dir (the path to
a shared library), which is important for building and for testing. In
Python, it is specified in setup.py.

I solved this the following way:

d/rules:
export BUILD_ROOT=$(shell pwd)

setup.py (patched):
buildroot = os.environ.get('BUILD_ROOT', '.')

This works nicely locally (cowbuilder) and in Salsa. However, on the
buildds, the build root is set empty and the build fails.

What is the normal way to get the package build root?
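
(As the reply above notes, $(CURDIR) is what finally worked -- a
minimal sketch of the d/rules side; setup.py stays as patched:)

---8<--
# $(CURDIR) is set by make itself, so it does not depend on a shell
# call like $(shell pwd), which apparently came back empty on the buildds
export BUILD_ROOT = $(CURDIR)

%:
	dh $@ --with python3 --buildsystem=pybuild
---8<--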

Best

Ole



Re: combining pybuild and cmake

2022-08-07 Thread Ole Streicher
Hi Jeremy,

Jeremy Sowden  writes:
> Don't know whether it is the proper way to do it, but this:
>
>   $ cat debian/rules
>   #!/usr/bin/make -f
>   #export DH_VERBOSE=1
>
>   %:
>   dh $@ --with python3
>
>   override_dh_auto_clean:
>   dh_auto_clean -O--buildsystem=cmake
>   dh_auto_clean -O--buildsystem=pybuild
[...]
> appears to put the right files in debian/tmp [...]

That solves my problem. Thank you very much!
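
For the archive, a sketch of the full set of overrides along these
lines -- the steps beyond dh_auto_clean are elided in the quote above,
so everything past it is an assumption:

---8<--
#!/usr/bin/make -f
export PYBUILD_NAME = sep

%:
	dh $@ --with python3

override_dh_auto_clean:
	dh_auto_clean -O--buildsystem=cmake
	dh_auto_clean -O--buildsystem=pybuild

override_dh_auto_configure:
	dh_auto_configure -O--buildsystem=cmake
	dh_auto_configure -O--buildsystem=pybuild

override_dh_auto_build:
	dh_auto_build -O--buildsystem=cmake
	dh_auto_build -O--buildsystem=pybuild

override_dh_auto_install:
	dh_auto_install -O--buildsystem=cmake
	dh_auto_install -O--buildsystem=pybuild
---8<--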

Cheers

Ole



combining pybuild and cmake

2022-08-07 Thread Ole Streicher
Hi,

I am working on a package (https://salsa.debian.org/debian-astro-team/sep),
that needs a two-stage build: First, a library is built with cmake/make,
and then a Python (wrapper) package is built the usual way.

I tried to just have two commands in d/rules:

---8<--
%:
	dh $@ --buildsystem=cmake
	PYBUILD_NAME=sep dh $@ --with python3 --buildsystem=pybuild
---8<--

however this seems to completely confuse the whole build system: it
somehow re-executes the cmake build in the second step, doesn't call
setup.py at all, and then doesn't find files to put into the package.

What is the proper way to do this?

Best regards

Ole



Re: Package upload failed only due GPG expiration?

2022-02-04 Thread Ole Streicher
Hi Filip,

otherwise, I would just sponsor your upload(s) in the meantime. Just
ping me if needed.

Cheers

Ole

Filip Hroch  writes:
> Hi Sebastian, and Andrey,
>
> thank you very much for that help. I decided to practise my patience,
> there's no hurry for Fitspng upload.
>
>
> Sebastian Ramacher  writes:
>
>> On 2022-02-02 16:20:15 +0500, Andrey Rahmatullin wrote:
>>> On Wed, Feb 02, 2022 at 11:50:47AM +0100, Filip Hroch wrote:
> ...
>>> When a valid signature is not found the uploader indeed doesn't get
>>> any
>>> notifications.
>>> > as well as.
>>> Yes, it's because of key expiration.
>>> Unfortunately I have no idea anymore which is the source of key
>>> data for
>>> the upload processing as that's inconsistent and I don't know if
>>> it's
>>> documented anywhere.
>>
>> From https://keyring.debian.org/
>>
>> "You can check the result with --recv-keys, but note it can take up
>> to 15
>> minutes for your submission to be processed. Your updated key will
>> then
>> be included into the active keyring in our next keyring push (which
>> happens approx. monthly)."
>>
>
> I checked validity of the key on keyring.debian.org GPG server
> via an independent account, during Sunday already. The authoritative
> source of keys is the active keyring, I think.
>
> The whole situation is my fault. I had prepared the upload since the
> beginning of January, and I prolonged the key approx. two weeks ago, but
> I notified only Ubuntu's Hockeypuck and forgot Debian's master.
>
> Thank you very much,
> FH
> --
> F. Hroch , Masaryk University,
> Dept. of theor. physics and astrophysics, Brno, Moravia, CZ



Re: Do autopkgtest for non-listed architectures prevent migration?

2022-01-24 Thread Ole Streicher
Hi Helge,

Helge Deller  writes:
> That's why I'm saying that you shouldn't exclude by default a *specific*
> architecture. The problem is often not bound to that architecture, but by
> the specifics which define that architecture (endianess, 32/64-bit, ...).

this brings me to the point why we don't have a way to require these
specifics in a canonical way, i.e. why we don't have (pseudo) packages
for endianness, word width etc.

For example, I observe more and more (astronomy/science) packages to be
64-bit only by design (or upstream decision), and there is no way to
clearly specify this as a build condition. Just ignoring this and letting
them fail doesn't look very smart.
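
At least at build time one can make the restriction explicit instead of
letting the tests fail: dpkg-architecture exports DEB_HOST_ARCH_BITS and
DEB_HOST_ARCH_ENDIAN. A sketch (this only documents the restriction in
d/rules, it is no substitute for a real dependency syntax):

---8<--
#!/usr/bin/make -f
include /usr/share/dpkg/architecture.mk

%:
ifneq ($(DEB_HOST_ARCH_BITS)/$(DEB_HOST_ARCH_ENDIAN),64/little)
	@echo "64-bit little-endian only by upstream design"; exit 1
else
	dh $@
endif
---8<--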

Best

Ole



Re: Do autopkgtest for non-listed architectures prevent migration?

2022-01-24 Thread Ole Streicher
Helge Deller  writes:
> On 1/24/22 09:10, Ole Streicher wrote:
>> Wookey  writes:
>>> If the package builds on the 32bit arches then I would advise that you
>>> let it build.  We always try to build for all arches in debian and it
>>> is very annoying if you have say an armhf machine and something is not
>>> available just because there was some test failure so upstream simply
>>> excluded builds completely. Packages should only be excluded on an arch
>>> if they are known not to build or to be genuinely useless there.
>>
>> I would disagree here: If we can't support a certain package on a
>> platform, then we shouldn't build it there. If neither upstream nor the
>> Debian maintainer is going to support armhf, then it should not be built.
>
> I'm not sure if there is a misunderstanding here.
> I think every package (unless it doesn't fit to a platform like a boot loader,
> or the target architecture is really not meant for that package)
> should be *built*. It may fail tests, in which case it should still be built,
> but the build should be marked failed and as such no *binary* package
> should be produced and uploaded.
> But since it was built, platform maintainers may see it, can check the
> build logs and may help to fix.
>
> The worst thing for arches is, if a package is being *excluded* from building
> on certain arches just because there was a build- or test error.
> That way nobody will notice and there will never someone look into it.

Users of the platform may request the package if they need it (and they
are relevant people for us). And attempting to build the package on such
a platform is as easy as adding the architecture to d/control for the
user. And porters can also just check which packages are not built on a
platform.

Best

Ole



Re: Do autopkgtest for non-listed architectures prevent migration?

2022-01-24 Thread Ole Streicher
Wookey  writes:
> If the package builds on the 32bit arches then I would advise that you
> let it build.  We always try to build for all arches in debian and it
> is very annoying if you have say an armhf machine and something is not
> available just because there was some test failure so upstream simply
> excluded builds completely. Packages should only be excluded on an arch
> if they are known not to build or to be genuinely useless there.

I would disagree here: If we can't support a certain package on a
platform, then we shouldn't build it there. If neither upstream nor the
Debian maintainer is going to support armhf, then it should not be built.

For example, I have a package (iraf) that builds fine on big-endian
systems but some tests fail there. I (being both upstream and the Debian
maintainer) am not going for a bug hunt since I don't see a use in it,
but I know that the existing bug may make some astronomical calculations
(unnoticed) wrong. It is better not to have that package at all than to
have a buggy package. If someone needs it, they are free to fix the
problems so that we can include it; but as long as nobody cares, I will
not deliver a known-buggy package by just disabling the failing tests.

> If the package is available then maybe someone who cares will fix
> it. If it isn't they probably won't even try. A note in the
> Debian.README about this known issue would be helpful.

This is only true if the bug is noticed, which is not always the
case.

Best

Ole



Re: Uscan with gitlab and user provided tarball

2021-10-19 Thread Ole Streicher
Paul Wise  writes:
> On Tue, 2021-10-12 at 14:43 +0200, Ole Streicher wrote:
>
>> https://gitlab.com/aroffringa/wsclean
>> 
>> He uses git submodules
>
> These all look like embedded code copies, so I suggest packaging them
> separately instead of including them the wsclean source tarball.

At least some of them are not really suitable to be packaged as separate
public packages. The two submodules in question (AOcommon, schaapcommon)
are personal utility functions of the upstream authors which are not
intended for general use.

I remember that a few years ago, I had some packages which all used a
"GreatCMakeCookoff" (or so) package, a collection of (not really
re-usable) tools to extend CMake. When trying to ITP this, the common
wisdom here was that this should not be packaged so that others do not
start depending on a bad-quality package.

To me, this looks similar: These are not for the public, as their
interface may change wildly, they are not even tagged in git. Even if
they are re-used in another package (wsclean), there is no guarantee for
me that both can be built on the same commit of the utility
repositories. There is zero gain in separating them, and only causing
lots of trouble.

I really see these two submodules as an integral part of the source and
would like to continue packaging it this way. Which keeps open the
question of how to effectively (with the help of upstream) detect and
download the complete source.

Best regards

Ole



Uscan with gitlab and user provided tarball

2021-10-12 Thread Ole Streicher
Hi,

the upstream of one of my packages recently moved from sourceforge to
Gitlab:

https://gitlab.com/aroffringa/wsclean

He uses git submodules, and this makes the automatically created tarball
incomplete. For my convenience, he created (and hopefully will continue
so) a manual asset, which is linked on the Releases page

https://gitlab.com/aroffringa/wsclean/-/releases/

Unfortunately, the file URL does not have a canonical name, as seen on
the HTML snippet:

<a href="https://gitlab.com/aroffringa/wsclean/-/package_files/15813079/download"
   class="…">
  …
  wsclean-v3.0.tar.bz2
</a>

This HTML is also generated by a script, so not directly downloadable.

Since Gitlab is one of the standard providers, there may be a good
solution to get the correct tarball? Or is there a better way for
upstream to provide a complete tarball than the one chosen here?

Best regards

Ole



Detect whether a debuild is running

2020-01-31 Thread Ole Streicher
Hi,

we got a bug report about network access during build (#949464) that was
caused by sphinxdoc. The root of this is a list maintained in another
package, python3-sphinx-astropy (sphinx_astropy/conf/v1.py):

intersphinx_mapping = {
    'python': ('https://docs.python.org/3/',
               (None, 'http://data.astropy.org/intersphinx/python3.inv')),
    [...]}

Since this is a nice central place to fix the problem by disabling the
inventory (or using a local one), I would like to make this dependent of
whether it is actually used to build a Debian package. Something like:

if 'DH_INTERNAL_BUILDFLAGS' not in os.environ:
    intersphinx_mapping = {
        'python': ('https://docs.python.org/3/',
                   (None, 'http://data.astropy.org/intersphinx/python3.inv')),
        [...]}

However, I am not sure what the canonical way is to detect whether a
debian build runs. How should one solve this?
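
Lacking a documented interface, one way out is not to guess at all but
to set an explicit variable from the packaging -- a sketch, with a
made-up variable name that just has to match on both sides:

---8<--
# in debian/rules of the package building the docs:
#     export SPHINX_ASTROPY_OFFLINE = 1

# in sphinx_astropy/conf/v1.py:
import os

if os.environ.get('SPHINX_ASTROPY_OFFLINE'):
    intersphinx_mapping = {}   # no network access during the build
else:
    intersphinx_mapping = {
        'python': ('https://docs.python.org/3/',
                   (None, 'http://data.astropy.org/intersphinx/python3.inv')),
        # [...]
    }
---8<--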

Best regards

Ole



Re: Symbols files for C++ libraries

2019-12-06 Thread Ole Streicher
Andrey Rahmatullin  writes:
> On Fri, Dec 06, 2019 at 05:37:25PM +0100, Ole Streicher wrote:
>> for the "casacore" package (written in C++), we wanted to introduce
>> symbols files for the shared libraries it produces. However, this
>> somehow does not work, as they seem to depend on the architecture and/or
>> the C++ compiler version:
>> 
https://buildd.debian.org/status/logs.php?pkg=casacore&ver=3.2.1-1
>> 
>> shows the build failures for the same package that compiled well on
>> x86_64 caused by differences in the symbols table.
>> 
>> How should one handle this?
> My favorite answer for that is in the Policy section 8.6:
>
> """
> For most C libraries, the additional detail required by symbols files is
> not too difficult to maintain. However, maintaining exhaustive symbols
> information for a C++ library can be quite onerous, so shlibs files may be
> more appropriate for most C++ libraries. 
> """

OK, I think I will just omit the symbols files. Casacore changes the
ABI quite often anyway, making these files of limited use only.

Thank you

Ole



Symbols files for C++ libraries

2019-12-06 Thread Ole Streicher
Hi,

for the "casacore" package (written in C++), we wanted to introduce
symbols files for the shared libraries it produces. However, this
somehow does not work, as they seem to depend on the architecture and/or
the C++ compiler version:

https://buildd.debian.org/status/logs.php?pkg=casacore&ver=3.2.1-1

shows the build failures for the same package that compiled well on
x86_64 caused by differences in the symbols table.

How should one handle this?

Best regards

Ole



Package name change problems

2019-11-21 Thread Ole Streicher
Hi,

I am in the process to rename one of my packages (source and binary
packages), from "sextractor" to "source-extractor" (see #941466 for
rationale).

For this, I followed the Wiki; in d/control:

Package: source-extractor
Architecture: any
Depends: ${misc:Depends}, ${shlibs:Depends}
Breaks: sextractor (<< 2.25.0+ds-1~)
Replaces: sextractor (<< 2.25.0+ds-1~)
Description: [...]

Package: sextractor
Architecture: all
Depends: source-extractor, ${misc:Depends}
Section: oldlibs
Description: [...]

However, what happens is that source-extractor is marked as "installed
as dependency of sextractor", and when sextractor is going to be
removed, source-extractor is marked for autoremoval (resp. removed
directly when using aptitude, #945192).

How can I handle this correctly?

Best regards

Ole




dwz failures

2019-07-15 Thread Ole Streicher
Hi,

I have a larger package (eso-midas) that has built successfully over
the past years. However, a new binNMU failed last night on
mips/mipsel/mips64el with the cryptic error message

   dh_dwz -a
dh_dwz: dwz -q 
-mdebian/eso-midas/usr/lib/debug/.dwz/mipsel-linux-gnu/eso-midas.debug 
-M/usr/lib/deb[...lengthy argument list...] returned exit code 1
make: *** [debian/rules:25: binary-arch] Error 255

full log: 
https://buildd.debian.org/status/fetch.php?pkg=eso-midas&arch=mips&ver=19.02pl1.0-1%2Bb1&stamp=1563134252&raw=0

which I do not understand.

Is this a bug in dwz? Can I just disable dh_dwz?
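
Disabling it is at least mechanically simple with an override in
d/rules; a sketch that skips dwz only on the failing architectures
(assuming the failure really is dwz's fault and not the package's):

---8<--
include /usr/share/dpkg/architecture.mk

override_dh_dwz:
ifeq (,$(filter mips mipsel mips64el,$(DEB_HOST_ARCH)))
	dh_dwz
endif
---8<--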

Cheers

Ole



Bug#919413: RFS: doxygen/1.8.15-1 [ITA]

2019-02-04 Thread Ole Streicher
Control: severity 921295 important
Control: severity -1 normal

(setting severity of RFS back to normal)

https://release.debian.org/buster/freeze_policy.html

> Transition Freeze
> Starting 2019-01-12, new transitions and large/disruptive changes are no 
> longer acceptable for buster.

I am wondering whether this applies here, given the large number of
packages which don't compile.

Best

Ole



Re: Help for SIGSEGV in test suite needed when built with gcc 8.2 what works nicely with gcc 6.3

2019-01-09 Thread Ole Streicher
Hi Andreas,

one thing I usually do in such cases is to rebuild the package with the
flags "-fsanitize=address -O0" added (optimization disabled just to
understand better what happens in the source). This switches on the
address sanitizer. It can test whether a local variable is accidentally
overwritten (by an off-by-one error or similar). Often it finds many
more bugs, which one can turn into bonus points upstream...
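
In a Debian package build, such flags can be injected through the
dpkg-buildflags maintainer variables -- a sketch for d/rules
(temporarily, for debugging only):

---8<--
export DEB_CFLAGS_MAINT_APPEND  = -fsanitize=address -O0 -g
export DEB_LDFLAGS_MAINT_APPEND = -fsanitize=address
---8<--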

Otherwise I see no other chance than to go through the debugger and see
where the strange address was set. 0x7 however sounds like somewhere a
small integer was assigned to the pointer, so I would try the sanitizing
stuff first.

Cheers

Ole

Andreas Tille  writes:
> Hi,
>
> as reported in bug #907624 ffindex autopkgtest fails with SIGSEGV in sid
> and buster.  I've tested in stretch (gcc 6.3) and the code works fine.
> I've reported upstream[1] the results of my gdb session where I was able
> to find the exact code line[2] where the SIGSEGV is thrown.  It turns out
> that the elements of a structure are not accessible:
>
>(gdb) print entry->offset
>Cannot access memory at address 0x7
>
> (full gdb log under [1] or in the bug log).
>
> In fact I tried in some more detailed debugging that any attempt to
> access one of the structure elements even for instance only injecting
> something like 
>
>if ( !entry->offset ) {
>
> in line 554 will trigger the SIGSEGV.  The values of the structure are
> set in line 350[3] and are OK there.  The function that contains the
> failing line is action() [4] and called via a pointer to this function
> in line 563[5] (I admit I have no real idea why this pointer to a
> function should be needed.  It's the only function that is used in this
> place and IMHO only adds an extra layer of complexity.)
>
> The structure is declared in the header file[6].
>
> I admit I fail to see why the code works under stretch with gcc 6.3
> but fails with gcc 8.2.
>
> Any idea?
>
> Kind regards
>
>Andreas.
>
>
> [1] https://github.com/soedinglab/ffindex_soedinglab/issues/7
> [2] https://salsa.debian.org/med-team/ffindex/blob/master/src/ffindex.c#L554
> [3] https://salsa.debian.org/med-team/ffindex/blob/master/src/ffindex.c#L350
> [4] https://salsa.debian.org/med-team/ffindex/blob/master/src/ffindex.c#L541
> [5] https://salsa.debian.org/med-team/ffindex/blob/master/src/ffindex.c#L563
> [6] https://salsa.debian.org/med-team/ffindex/blob/master/src/ffindex.h#L30



Re: Removing a package from unstable

2019-01-06 Thread Ole Streicher
Mattia Rizzolo  writes:
> On Sat, Jan 05, 2019 at 09:24:06PM +0100, Ole Streicher wrote:
>> I have a source package (python-astropy) that I now want to remove from
>> unstable. I took care that all reverse dependencies were removed now in
>> recent uploads. As suggested in [1], I first issued
>> 
>> $ ssh mirror.ftp-master.debian.org "dak rm -Rn python-astropy"
>> 
>> to see whether it would run without errors. The output is however:
> […]
>> In total, more than 50 mpackages are listed. Many of them because of the
>> python3-astropy (build) dependency in hurd (which is unavoidable to
>> break, but that is not a release platform anyway); but also a lot of old
>> cruft. I thought that this would be removed automatically?
>
> cruft is not automatically removed as long as it would break stuff,
> like in this case…
> You say "unavoidable", but to me it seems:
>  * hurd is blocked because src:astropy doesn't build simply because
>src:python-psutil is not building which is very simply because of
>#676450 which has a patch for 6 years and is team maintained even!
>  * kbsd is not building because of a known issue in src:python3.7 that
>misdetects the availability of sem_open() on kbsd, but alas the kbsd
>porters are too few and despite knowing the issue they can't work on
>them; however I've also been assured that it's not that hard to fix,
>so I guess somebody caring enough could try spending some time on
>this (in which case, let me point you to James Clark, he looked a bit
>at the issue in the past, he could tell you where to look).

That is a different thing: once the dependencies on Hurd are fixed, you
get python3-astropy back on that platform, independently whether
python-astropy was removed or not. If the dependencies remain unfixed, I
will remove python-astropy anyway at some point.

>> So what is the correct way now to get the package removed?
>
> You could go ahead on the bug report, listing the rdeps that you
> investigated and stating that you are willing to break them.
> However, looking at how many there are, I think that would be rude and
> as a supporter of the ports project I try my best to fix such issues
> before going the way of breaking the rdeps.  At least the hurd one looks
> incredibly trivial to deal with from a quick glance.

All these rdeps would come back automatically once someone fixes the
dependencies. And I do not expect any users of python3-astropy on
unstable Hurd. But if you fix it, I can wait a while
before requesting removal. Is a week enough?

> As I stated, removing from testing looks much simpler

But it would happen automatically once the package is not in unstable,
right?

Cheers

Ole



Removing a package from unstable

2019-01-05 Thread Ole Streicher
Hi,

I have a source package (python-astropy) that I now want to remove from
unstable. I took care that all reverse dependencies were removed now in
recent uploads. As suggested in [1], I first issued

$ ssh mirror.ftp-master.debian.org "dak rm -Rn python-astropy"

to see whether it would run without errors. The output is however:

---8<--
Will remove the following packages from unstable:

astropy-utils |1.2.1-1 | all
python-astropy |1.2.1-1 | source
python-astropy |2.0.9-1 | source, amd64, arm64, armel, armhf, hurd-i386, 
i386, kfreebsd-amd64, kfreebsd-i386, mips, mips64el, mipsel, ppc64el, s390x
python-astropy-doc |1.2.1-1 | all
python3-astropy | 1.2.1-1+b1 | hurd-i386

Maintainer: Debian Astronomy Maintainers 


--- Reason ---

--

Checking reverse dependencies...
# Broken Depends:
aplpy: python-aplpy
   python3-aplpy
astlib: python3-astlib
[...]

# Broken Build-Depends:
aplpy: python3-astropy
astrodendro: python3-astropy (>= 0.2.0)
[...]
pyregion: python-astropy
  python3-astropy
[...]

Dependency problem found.
---8<--

In total, more than 50 packages are listed. Many of them because of the
python3-astropy (build) dependency on hurd (which is unavoidable to
break, but that is not a release platform anyway); but also a lot of old
cruft. I thought that this would be removed automatically?

So what is the correct way now to get the package removed?
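
(From [1] I understand that the eventual request is a severity-normal
bug against the ftp.debian.org pseudo-package, roughly like the sketch
below; the open question is just whether the rdeps above allow filing
it yet:)

---8<--
To: submit@bugs.debian.org
Subject: RM: python-astropy -- ROM; superseded by src:astropy (Python 3 only)

Package: ftp.debian.org
Severity: normal
---8<--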

Best regards

Ole

[1] https://wiki.debian.org/ftpmaster_Removals



Arch:all package dependencies

2018-01-25 Thread Ole Streicher
Hi,

I recently created a new binary package "iraf-wcstools", which is
Arch:all, but depends on architecture dependent packages. Specifically,
it depends on "iraf", which is available only on selected
architectures (f.e. not on s390x):

Package: iraf-wcstools
Architecture: all
Depends: iraf, [...]

How do I now make "iraf-wcstools" migrate? I get as migration excuse:

iraf-wcstools/mips unsatisfiable Depends: iraf
iraf-wcstools/mips64el unsatisfiable Depends: iraf
iraf-wcstools/ppc64el unsatisfiable Depends: iraf
iraf-wcstools/s390x unsatisfiable Depends: iraf

which is all correct, but shouldn't be an excuse to block the migration,
right?

Best regards

Ole



Splitting source package into two

2018-01-11 Thread Ole Streicher
Hi,

I am the maintainer of the "python-astropy" package, that currently
creates packages for both Python 2 and Python 3. Both packages have a
number of reverse dependencies.

Recently, upstream announced a new version 3.0 of astropy, which
supports Python 3 only, and I want to have a smooth migration path. I
thought of a temporary package split: create a new source package
"astropy" that inherits of the current python-astropy package, but only
builds python3-astropy (and the utils + doc, which depend on
python3-astropy), and update this to version 3.0. Then I would remove
these binary packages from the python-astropy package.

The question is now: what do I have to do, and in which order? When I
upload a new "astropy" package, it offers the same Python 3 package as
the (still existing) python-astropy package, but with a higher
version. Is this a problem? Or do I need to upload an updated
python-astropy package (with the Python 3 content removed) first? How
should one then handle their reverse dependencies?

(The question whether it is useful to have both python 2 and python 3 is
discussed in the d-python mailing list.)

Best regards

Ole



Re: Dependencies across architectures

2018-01-07 Thread Ole Streicher
Hi Paul,

Paul Wise <p...@debian.org> writes:
> On Sat, Jan 6, 2018 at 5:43 PM, Ole Streicher wrote:
>> "iraf" exists only on selected architectures due to some required
>> assembler code for each arch and problems with big endian.
> There could be a fallback in C for arches with no assembler yet
> and any non-baseline instructions should be detected at runtime.

Unfortunately, this is impossible: the assembler code creates a kind of
sigsetjmp() (with its own calling interface) for Fortran 77. This cannot
simply be remodelled in C. In principle, one could re-implement it
with the libunwind library (see [1]), but since glibc has been scrambling
stack information for some time, this does not work anymore.

If you have a portable solution, share it with me :-)

> Upstream should fix the code to deal with endianness correctly.
> Please file bugs upstream about these if you didn't already.

Upstream is difficult for this package: the package has no new upstream
version since five years and the communication is difficult. Usually,
this would count as "dead", but the package has quite some importance
for the astronomy community, and therefore I decided to create a
temporary fork, also for other downstreams (Fedora, Conda). The package
has ~750.000 LOC, so all I can do is to keep it working as it is. Big
endian was there at some point (10 years ago) on 32 bit, but they never
had a 64-big big endian release; so unless someone really puts some
efforts in, this will not happen (s390x).

>> From the description of "Multi-Arch: foreign" I would expect that this
>> allows the dependency to be resolved by using another architecture. However,
>> piuparts (and the migration excuses) claim a missing dependency on the
>> archs not supported by IRAF.
>
> piuparts.d.o only tests amd64 at this stage, could you quote the error
> piuparts gives for you on other arches? I'm guessing you didn't add
> the foreign architecture to the chroot that piuparts was using for
> testing.

It was (probably) my mistake, as I didn't run piuparts locally.

> I'm pretty sure the testing migration doesn't support
> cross-architecture dependencies, but the release team will hint things
> into testing where that is the only thing blocking migration.

If we take Multi-Arch seriously, this shouldn't be the case, right?

>> My first thought was to limit the possible archs for python3-pyraf (by
>> explicitly setting the arch list and/or build-depending on iraf), but
>> this would not require the removal of the packages already built.
>
> Looks like you already tried this option, to get it to work you will
> have to ask the ftp-team to remove the obsolete binaries on the arches
> where pyraf no longer builds.
>
> https://qa.debian.org/excuses.php?package=pyraf

which is what I pragmatically did now (#886524). I was however not sure
what the optimal way is, since I also don't know which architectures are
co-runnable in practice. Theoretically, one could do anything with
qemu-userland, however.

Best regards

Ole

[1] https://github.com/olebole/zsvjmp/blob/master/zsvjmp-libunwind.c



Dependencies across architectures

2018-01-06 Thread Ole Streicher
Hi,

I have an "arch: all" package "python3-pyraf" (source package: "pyraf"),
that has a dependency on the package "iraf", but no build dependency on
that.

"iraf" exists only on selected architectures due to some required
assembler code for each arch and problems with big endian. python3-pyraf is
however buildable on all architectures.

In the first version of pyraf, I specified:

Package: python3-pyraf
Architecture: all
Depends: iraf

and iraf has:

Package: iraf
Architecture: arm64 armel armhf hurd-i386 linux-amd64 linux-i386
Multi-Arch: foreign

From the description of "Multi-Arch: foreign" I would expect that this
allows the dependency to be resolved by using another architecture. However,
piuparts (and the migration excuses) claim a missing dependency on the
archs not supported by IRAF.

My first thought was to limit the possible archs for python3-pyraf (by
explicitly setting the arch list and/or build-depending on iraf), but
this would not require the removal of the packages already built.

And, in principle the dependency should work across archs (f.e. for
x32). But why does it not work with the specification above?

Best regards

Ole



How to switch all->any?

2017-12-04 Thread Ole Streicher
Hi,

I have a (Python 3) package that introduced some system dependent
binaries, and therefore the package had to switch from "all" to "any".

The package is not in testing at the moment. However, it also does not
migrate, with the message "missing build on all: python3-sunpy (from
0.7.9-1)".

How can I get rid of that? 0.7.9-1 is in the Debian archives, but
neither part of testing nor of unstable. It would however be useful to
keep this version, f.e. for use in snapshots.d.o.

Best regards

Ole



Re: How to find Multi-Arch path(s)

2017-11-25 Thread Ole Streicher
Guillem Jover  writes:
> The point is that the Multi-Arch concept in Debian is all about the
> interfaces. How packages and files interface with each other, and
> what is possible and allowed. Some examples:
>
>   * A script might be arch-independent in the contents sense; i.e., it
> is the same on all architectures. But its interface might be
> arch-dependent, because itself uses processor or kernel specific
> interfaces, and its output changes depending on the architecture.
> These cannot be marked as Mutli-Arch foreign.
>   * A compiled binary might be arch-dependent in the contents sense;
> i.e., it is different on each architecture. But its interface might
> be arch-independent, because it does not change independently on
> where it is executed, or for what arch it was built for. These can
> be marked as Multi-Arch foreign.

Ahh. This is the point. So, there is (in my case) no reason to put *any*
binary into /usr/lib/<triplet>/iraf; they all should go to /usr/lib/iraf,
independent of the architecture. That means that the main package
(f.e.) cannot be co-installed for different archs, but this is also not
required.

>   * A shared library that is being linked by some other package with
> executables, needs to match their architecture and needs to be
> coinstallable with itself, otherwise you could not install
> packages of different architectures linking againts that library.
> Say prog-a:i386 → libso:i386, and prog-b:amd64 → libso:amd64.
> These are usually as Multi-Arch same.

The package does not support shared libs yet -- as I said, they made some
funny solutions: f.e. IRAF has its own incarnation of the libc, which is
implemented on the basis of Fortran libs, and this lib is also called
"libc.a". To avoid using the wrong lib, this one is not linked with "-lc",
but by specifying the full path. Changing this to some shared lib
approach *and* staying compatible with third-party plugins is not
trivial and part of the longer-term evolution (depending on the success
of the package).

So, I have a development package with "libc.a" and other stuff, but also
some executables (compiler) which are arch dependent. Handling multiarch
here is probably not worth the effort -- I see no use case to have the
development environment for more than one arch co-installed, and
therefore I would put the contents (binaries and static libs) to
/usr/lib/iraf/ and mark it with "Multi-arch: no".

Would this be OK?

> So, say, your native arch (the one dpkg was built for) is amd64,
> and you have enabled i386 as a foreign arch. You then install the
> main iraf package for amd64 (the default), and then want to use some
> extension/plugin that is available only for 32-bit arches. apt for
> example will just pull the i386 version, because it'd be marked as
> Multi-Arch foreign and the dependencies would be fullfilled.

Looks like as I want it. Let me repeat with my own words (just to be
sure I understand it): I have

iraf   - Multi-arch: foreign, x86_64, i386, ...
iraf-sptables  - Multi-arch: foreign, i386; Depends: iraf

On an x86_64 with i386 enabled, when I do an "apt install iraf-sptables",
I get iraf as x86_64 and iraf-sptables as i386. Correct?

> Hope my clarifications above clarified things. And regarding upstream,
> I'd just remove the multilib support stuff. Although other distributions
> and upstreams seem to still have a strange love affair with that, so,
> dunno. :)

I am in contact with the Fedora astro guy, so in any case I will discuss
it with him. But my fork is meant to have a chance to be included
upstream, so I will not do things where I know they won't get accepted.

Thank you very much for your patience and your good explanations! I feel
now that I understand the multiarch idea a bit better (well, hopefully).

Best regards

Ole




Re: How to find Multi-Arch path(s)

2017-11-24 Thread Ole Streicher
Hi Guillem,

thanks for the quick answer.

Guillem Jover <guil...@debian.org> writes:
> On Fri, 2017-11-24 at 09:52:23 +0100, Ole Streicher wrote:
>> /usr/lib/${DEB_TARGET_MULTIARCH}/iraf
>
> It that was to be used, then it should be DEB_HOST_MULTIARCH, the
> _TARGET_ variants are for canadian cross-compilers. :) If this is not
> clear from the man page, I'm happy to clarify it further.

OK; I was not sure.

>> So, how can I canonically (ideally from C) retrieve a sorted list of
>> supported multi arch paths at runtime? Or is there another good way to
>> solve this? I would think it is a standard use case for multi arch,
>> isn't it?
>
> In general if you have to modify an upstream codebase to make it
> package-manager aware (be that dpkg, rpm or whatever), that to me seems
> like a big red sign too. In this case I think the problem is indeed that
> the original question is flawed, so there's no good answer. :)

IRAF is a quite old program (>>30 years), with some unconventional
solutions. And it is in "maintenance mode" yet.

The Multi-Arch solution upstream took is to put the binaries into
directories

/iraf/iraf/bin.linux    for 32 bit Linux (x86)
/iraf/iraf/bin.linux64  for 64 bit Linux (x86)
/iraf/iraf/bin.macosx   for 32 bit Mac OSX
/iraf/iraf/bin.macintel for 32 bit Mac OSX

(similar subdirs bin.<arch> are under /iraf/iraf/unix, /iraf/iraf/noao etc.)

and then have explicit if statements "if 64-bit linux, also look in
32-bit dirs". This is not applicable for Debian; first because of the
FHS violation, but also because other archs are not really possible. The
ARM architecture is however a working platform, with some use cases
(people want to run it on their Raspberry Pi).

Therefore I move everything under /iraf/iraf to /usr/share/iraf (because
it is arch independent), except the bin.* dirs, which in a Debian
Multiarch environment should go into /usr/lib/<triplet>/iraf, right?

> So, going back. AFAIUI the iraf project supports plugins/extensions in
> the form of executables. 

Yes.

> And some might only be available in a single arch.

No. They are available under 32-bit archs. At least armhf and
i386 (linux + kfreebsd). Maybe x32.

> If that's the case, that looks like those extensions should be
> placed under a /usr/lib/iraf (or similar, perhaps even /usr/libexec if
> we allowed that!), and those package be marked "Multi-Arch: foreign",
> then that's the package manager's problem to choose the most
> appropriate architecture for those binaries (perhaps by using the
> futurable executable attribute of an arch).

This does not work: I have no way to execute from these binaries the
correct 64-bit binary, since it does not know its directory (the
multiarch dir known at compile time is i386-linux-gnu, not
x86_64-linux-gnu).

And I also don't think it is a clean way to make the place of a 32-bit
executable dependent on whether a 64-bit executable exists. Besides, I
would also like to be able to choose the 32-bit plugin even when a
64-bit one exists.

I could even think of more exotic test cases, like loading some
qemu-aware kernel modules that enable to run armhf binaries on my intel
machine, and then debug a small plugin for armhf -- so the list of
supported archs may even change at runtime.

To me, this all looks like the perfect use case for "Why do we need
Multi-Arch in the end?", and that's why I would like to have it
implemented in a clean way. I don't see that the problem stems from
ugly upstream code either: I have no problem with heavily patching it
(in fact, the version being packaged is my own fork that significantly
deviates from the original code base). If you think this should be
solved upstream, please tell me how :-)

Best regards

Ole



How to find Multi-Arch path(s)

2017-11-24 Thread Ole Streicher
Hi,

I want to package a software, "iraf" (with extensions), that uses some
system dependent binaries internally. Some of the extensions will be
available in 32 bit only, so this is a good use case for
Multi-Arch. That means that the binaries will go to

At run time, I would now need to get the list of paths that are
supported by the system, in their "preferred" order (so, even from a
binary compiled for i386, it would be preferred to call a x86_64 binary
if that is supported on the system). This list is generally not known at
compile time, since it depends on the details of the target system
configuration (f.e. an architecture may be supported via a software
emulation).

However, I could not find out how to get this list?

"dpkg --print-architecture" and "dpkg --print-foreign-architectures"
gives only what dpkg is configured for, not what is supported as
executable. And, it does not return the multi-arch triplet.

"dpkg-architecture -q DEB_HOST_MULTIARCH" may give the host architecture
(default, or specified by the arch name), but from the manpage and the
package description of dpkg-dev it is not intended for runtime, but for
package build/development.

So, how can I canonically (ideally from C) retrieve a sorted list of
supported multi arch paths at runtime? Or is there another good way to
solve this? I would think it is a standard use case for multi arch,
isn't it?

Best regards

Ole



Re: ocaml not migrating?

2017-10-12 Thread Ole Streicher
Hi Andrey,

Andrey Rahmatullin <w...@debian.org> writes:
> On Thu, Oct 12, 2017 at 10:03:46AM +0200, Ole Streicher wrote:
>> since a few days, ocaml has the "Migration status: OK: Will attempt
>> migration", but it does not migrate.

> https://release.debian.org/transitions/html/ocaml.html (I guess)

I am wondering why this is not shown on the ocaml tracker page?

Cheers

Ole



ocaml not migrating?

2017-10-12 Thread Ole Streicher
Hi,

since a few days, ocaml has the "Migration status: OK: Will attempt
migration", but it does not migrate.

https://tracker.debian.org/pkg/ocaml

Since I have a few packages depending on ocaml (namely plplot and its
reverse dependencies, gnudatalanguage) which are set to AUTORM, I am
wondering what prevents ocaml (and its dependencies) from migrating?

Best regards

Ole



Piuparts problem

2017-10-03 Thread Ole Streicher
Hi,

one of my packages is marked as "Rejected due to piuparts regression",
but I don't understand the log message.

https://piuparts.debian.org/sid/fail/postgresql-pgsphere_1.1.1+2017.08.30-1.log

When I grep for "error:" there, the only thing I see here is:

0m13.8s ERROR: Command failed (status=100): ['chroot', 
'/srv/piuparts.debian.org/tmp/tmpivwAUi', 'apt-cache', 'show', 
'postgresql-pgsphere=1.1.1+2017.08.30-1']
  E: No packages found

What does piuparts want to tell me here? And what is wrong with the
package?

Best regards

Ole



Build-dependencies for qt5

2017-08-27 Thread Ole Streicher
Hi,

I am adopting a package (plplot) that depends on (Qt4 or) Qt5. The
description of the "qt5-default" package says that this package should
not be used to build a dependent package, and to look into

http://pkg-kde.alioth.debian.org/packagingqtbasedstuff.html

instead. From this, I take that I have to use "export QT_SELECT = 5" in
d/rules; but what are then my required build dependencies? Just
qtchooser? Or do I need to install any qt5 development package as well?
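
My current reading of that page, as a sketch (the concrete -dev package
names are an assumption and depend on which Qt modules plplot actually
uses):

---8<--
# debian/control:
#     Build-Depends: ..., qtbase5-dev, qttools5-dev-tools

# debian/rules:
export QT_SELECT = 5

%:
	dh $@
---8<--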

Best regards

Ole



Re: Package not migrating

2017-08-19 Thread Ole Streicher
Hi Niels,

Niels Thykier <ni...@thykier.net> writes:
> Ole Streicher:
>> Andrey Rahmatullin <w...@debian.org> writes:
>>> On Thu, Aug 17, 2017 at 08:42:37PM +0200, Ole Streicher wrote:
> The package is affected by the same issue that chocolate-doom was in the
> referenced bug (#824169).  The situation in summary:
>
>  * The source produces 1 or more binaries in "main" and 1 or more
>binaries in "contrib"
>
>  * During upload, dak can (mistakenly) end up putting the source in
>both main and contrib at the same time.  Technically, it ends up in
>different suites (unstable vs unstable-debug), but these suites have
>to agree.

Can't they be removed manually here? Or with a dirty script as a workaround?

>  * Once britney requests dak to migrate the package to testing, dak
>will notice the issue and reject the import (resulting in a
>rollback of all changes to testing).
>
>  * The quickest way to untangle the situation is to block the affected
>package (i.e. ensure britney will not migrate it), so other packages
>can migrate to testing.  This is most likely why cpl is now blocked
>by a manual hint.

But this will prevent fixing bugs of the package in testing, doing
transitions etc. Currently the new package fixes a bug on armhf, which
is not that important due to the limited user base, but for sure there
will come RC bugs or transitions in this package (which then will also
prevent the reverse dependencies from migrating), or even transitions in
a dependent package (and then the transition will be stuck on a
non-migratable package). Blocking packages which are perfectly fine
seems to be a quick, but also a dirty, solution.

Is there a reason why the bug severity is "important" and not "serious"?
A bug that is preventing other packages from reaching "testing" (and,
in a few years, "stable" aka then "buster") should be RC, shouldn't it?
(And, shouldn't the affected packages be tagged as "affects" in the bug?)

I am also wondering why this did not happen before? The cpl package
structure has been unchanged for years, without any reported problems.

> With the situation clarified, here is how it can be fixed:
>
>  1a. Have dak patched so it does not do this again.  Depending on the
>  exact implementation, it may need to be combined with 2).  Once
>  that is resolved, it will also need 3)

Which is nothing where I could do much, due to my non-existing dak knowledge.

>  1b. Avoid having a source package that builds binaries in multiple
>  components.  Sadly, this often implies duplicating the majority
>  of the source package.  Combine with 2) + 3)

Sounds like a lot of work to work around a bug somewhere else, especially
since the bug is temporary: one would want to revert the workaround
after the fix.

>  1c. *Maybe* the FTP masters can work around this (I don't remember)
>  on their side on a per package basis.  This will need to be
>  combined with 2) (although, the FTP masters will probably do it
>  as the same time as this item) + 3).

So, this seems to be the way to go, right? Shall I just open a bug for
ftp-masters asking for unblock? Or what should I do here?

Best regards

Ole



Re: Package not migrating

2017-08-18 Thread Ole Streicher
Andrey Rahmatullin <w...@debian.org> writes:
> On Thu, Aug 17, 2017 at 08:42:37PM +0200, Ole Streicher wrote:
>> * Not touching package due to block request by adsb (check
>>   https://release.debian.org/testing/freeze_policy.html if update is
>>   needed) 
> https://release.debian.org/britney/hints/adsb:
> # 20170720
> # in both main and contrib, breaks britney / dak
> # (#824169)
> block cpl

And what should I do to get it migrated?

Best

Ole



Package not migrating

2017-08-17 Thread Ole Streicher
Hi,

I have a package (cpl) that did not migrate since 38 days, but I don't
see a reason:

Excuse for cpl

* Migration status: BLOCKED: Needs an approval (either due to a freeze
  or due to the source suite) 
* 38 days old (needed 10 days)
* Not touching package due to block request by adsb (check
  https://release.debian.org/testing/freeze_policy.html if update is
  needed) 
* Piuparts tested OK - https://piuparts.debian.org/sid/source/c/cpl.html
* Not considered 

The freeze URL above speaks about the Buster freeze, but isn't that a
bit early?

Best regards

Ole



Re: Lintian orig-tarball-missing-upstream-signature

2017-07-31 Thread Ole Streicher
Hi Paul, Christian,

Christian Seiler <christ...@iwakd.de> writes:
> On 07/31/2017 10:54 AM, Paul Wise wrote:
>> On Mon, Jul 31, 2017 at 4:24 AM, Ole Streicher wrote:
>>> is not really helpful to me; at least I did not find a mention in the
>>> Debian policy that the signature should be included in the .changes
>>> file. Also, it seems that the standard (pdebuild) toolchain does not
>>> include it by default.
>> 
>> Policy documents current practice rather than describing what
>> practices should be taken, so I think that we will only get this in
>> policy once it is more common.

Hmm, but the right place to discuss what practices should be taken is
debian-devel, right? Especially if it has impact on the packaging
workflow. That's why I was wondering why it was not discussed there.

>> The standard toolchain here is uscan, not pdebuild, and there is a bug
>> asking placing the signatures in the correct place open already, it
>> just needs someone to do the work:

Shouldn't this be fixed before a new error is introduced?

> How does this interact with git-based workflows? Currently I use
> pristine-tar (in combination with gbp) for all of the packages I
> maintain. [1]

Oops, this is my case as well. At least my usual workflow:

gbp import-orig --uscan
gbp pq rebase
gbp dch -R --commit 
gbp buildpackage --git-tag

seems to be broken here -- at least the signature is not in the
pristine-tar branch (and I don't know how it shall get there, and how it
shall get back out).

Since I am not able to fix uscan (as an almost-perl-illiterate), and I
also don't know how this will influence the git-buildpackage
workflow, I would for the moment just ignore the error.

Any other advice?

Best regards

Ole



Lintian orig-tarball-missing-upstream-signature

2017-07-31 Thread Ole Streicher
Hi,

since the last lintian update, I get an error 

orig-tarball-missing-upstream-signature

for new packages which have the PGP signature check enabled
(f.e. python-astropy, or python-astropy-helpers). The description

| The packaging includes an upstream signing key but the corresponding
| .asc signature for one or more source tarballs are not included in
| your .changes file.

is not really helpful to me; at least I did not find a mention in the
Debian policy that the signature should be included in the .changes
file. Also, it seems that the standard (pdebuild) toolchain does not
include it by default.

What is the preferred way to include the upstream signature?

Was there some discussion about this in debian-devel that I missed?
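
One thing that seems to work (an assumption from the dpkg-source side,
untested with a pristine-tar workflow): keep the detached signature next
to the tarball under the .asc name that dpkg-source looks for, so it
gets listed in the .dsc and .changes, e.g. (version made up):

---8<--
$ ls ../
python-astropy_2.0.1.orig.tar.gz
python-astropy_2.0.1.orig.tar.gz.asc   # detached upstream signature
---8<--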

Best regards

Ole



Re: How to make a shared lib recognized by debhelpers?

2017-07-18 Thread Ole Streicher
Gert Wollny <gw.foss...@gmail.com> writes:
> Am Montag, den 17.07.2017, 21:20 +0200 schrieb Ole Streicher:
>
>> How can I do a proper handling of the library here? I guess (I am not
>> an octave expert, however), that the name of the library shall not be
>> changed.
> One way to make dh_strip recognize files that are not in the typical
> name pattern is to make it executable.

Thanks for the hint. What I do now (in d/rules) is basically:

override_dh_auto_install:
	dh_auto_install
	chmod ugo+x \
	    debian/tmp/usr/lib/*/octave/site/oct/api-*/*/plplot_octave.oct

override_dh_shlibdeps:
	dh_shlibdeps
	chmod ugo-x \
	    debian/octave-plplot/usr/lib/*/octave/site/oct/api-*/*/plplot_octave.oct

This seems to work well. A debug package is created as well.

Best regards

Ole



Re: How to make a shared lib recognized by debhelpers?

2017-07-18 Thread Ole Streicher
James Cowgill  writes:
> You have been hit by bug #35733 in debhelper. Possibly #862909 might
> apply here as well.

Wow! That is quite old.

I am wondering why I can't just put the library into .shlibs
(which is mentioned, but not documented, in the dh_makeshlibs manpage) --
in this case I additionally get the warning
"pkg-has-shlibs-control-file-but-no-actual-shared-libs".

> I don't see an easy fix for this, so unfortunately you might have to
> keep the existing workarounds in the packaging.

How would I then ensure the creation of the automatic debug package?
When I just use the existing "strip" command, the debugging symbols just
get removed; and also, how does dh know that it should build a debug
package if it does not know that the package contains a shared lib?

Best regards

Ole



Re: How to make a shared lib recognized by debhelpers?

2017-07-18 Thread Ole Streicher
Andrey Rahmatullin  writes:
> What does file(1) return for this file?

The expected:

$ file plplot_octave.oct 
plplot_octave.oct: ELF 64-bit LSB shared object, x86-64, version 1 (SYSV), 
dynamically linked, BuildID[sha1]=98a4031426db920f83eb6bd2ac63b52be705fee8, not 
stripped

I also crosschecked the build log, it is a usual build for a shared
library:

/usr/lib/ccache/c++  -fPIC -g -O2 
-fdebug-prefix-map=/build/plplot-5.12.0+dfsg=. -fstack-protector-strong 
-Wformat -Werror=format-security -Wdate-time -D_FORTIFY_SOURCE=2 -Wdate-time 
-D_FORTIFY_SOURCE=2 -Wl,-z,relro -Wl,--as-needed -shared  -o plplot_octave.oct 
CMakeFiles/plplot_octave.dir/plplot_octaveOCTAVE_wrap.cxx.o 
-Wl,-rpath,/build/plplot-5.12.0+dfsg/obj-x86_64-linux-gnu/src: 
../../src/libplplot.so.14.0.0 -loctave -loctinterp 

Best regards

Ole



How to make a shared lib recognized by debhelpers?

2017-07-17 Thread Ole Streicher
Hi,

I am currently adopting plplot [1], which included a simplification and
modernization of the build system (using modern debhelpers).

I now have the problem, that one of the shared libraries is probably not
detected correctly: "lintian" tells me

E: octave-plplot: unstripped-binary-or-object 
usr/lib/x86_64-linux-gnu/octave/site/oct/api-v51/x86_64-pc-linux-gnu/plplot_octave.oct
E: octave-plplot: missing-dependency-on-libc needed by 
usr/lib/x86_64-linux-gnu/octave/site/oct/api-v51/x86_64-pc-linux-gnu/plplot_octave.oct

The Depends: field of the package however contains the entry
${shlibs:Depends}, and this also works correctly for the other binary
packages built from the source.

The manpages of dh_makeshlibs, dh_shlibdeps and dh_strip didn't
enlighten me here. In the original plplot package, "strip" was called
directly in install-arch, which doesn't look very smart, because it
f.e. also prevents the creation of a proper debug package, and
dpkg-shlibdeps needs to be called later as well, explicitly for this
library.

How can I do a proper handling of the library here? I guess (I am not an
octave expert, however), that the name of the library shall not be
changed.

Best regards

Ole



Re: mpgrafic - mpirun test program as root in automatic build

2017-01-18 Thread Ole Streicher
Paul Wise <p...@debian.org> writes:
> On Wed, Jan 18, 2017 at 3:58 PM, Ole Streicher wrote:
>
>> Also when using cowbuilder? At least I see the whole build done by root
>> when running in my cowbuilder chroot. That was the point that led to
>> the trouble here...
>
> Yep. I tested this with id and override_dh_auto_* in cowbuilder:
>
>  fakeroot debian/rules clean
>debian/rules override_dh_auto_clean
> uid=0(root) gid=0(root) groups=0(root),1234(pbuilder)
>  debian/rules build
>debian/rules override_dh_auto_configure
> uid=1234(pbuilder) gid=1234(pbuilder) groups=1234(pbuilder)
>debian/rules override_dh_auto_build
> uid=1234(pbuilder) gid=1234(pbuilder) groups=1234(pbuilder)
>debian/rules override_dh_auto_test
> uid=1234(pbuilder) gid=1234(pbuilder) groups=1234(pbuilder)
>  fakeroot debian/rules binary
>debian/rules override_dh_auto_install
> uid=0(root) gid=0(root) groups=0(root),1234(pbuilder)

OK, I finally found it: I had a line 

BUILDUSERNAME=

in my .pbuilderrc, which was obviously interpreted as root.
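
(A minimal sketch of the corrected setting -- BUILDUSERID/BUILDUSERNAME
are the pbuilderrc variables for the unprivileged build user; the values
here are assumptions:)

 ~/.pbuilderrc --
# either drop the empty assignment entirely, or set both values
# to a real unprivileged user:
BUILDUSERID=1234
BUILDUSERNAME=pbuilder
---8<--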

Thanks

Ole



Re: mpgrafic - mpirun test program as root in automatic build

2017-01-18 Thread Ole Streicher
Paul Wise  writes:
> On Wed, Jan 18, 2017 at 3:37 PM, Boud Roukema wrote:
>
>> I guess by "both of these" you mean "most of the build steps (apart from
>> the 'debian/rules install' step)"?
>
> What I wrote wasn't clear and wasn't strictly true, sorry!
>
> When manually building from source:
>
> You always build/test as a normal user.
> You install as either root or normal user, depending on the install
> prefix.
>
> When doing Debian package builds:
>
> You always build/test as a normal user.
> You always install using fakeroot.

Also when using cowbuilder? At least I see the whole build done by root
when running in my cowbuilder chroot. That was the point that led to
the trouble here...

Best

Ole



Re: mpgrafic - mpirun test program as root in automatic build

2017-01-17 Thread Ole Streicher
James Cowgill <jcowg...@debian.org> writes:
> On 16/01/17 23:58, Boud Roukema wrote:
>> Since, in general, there is no reason for mpirun to run as root,
>> the sid version of mpirun (from openmpi) apparently refuses to run as root.
>> (I have not reproduced this behaviour myself - Ole Streicher
>> has warned me about it.) The openmpi developers provide an option
>> --allow-run-as-root.
>
> I'm not sure I follow. Debhelper runs the testsuite during the build
> target so it shouldn't be run as root anyway. I don't think you need any
> workarounds at all for this.

I (as Boud's sponsor) have the problem that in my cowbuilder the build is
done as root, leading to the error message in question and a failure of
the test and the build. Maybe something is wrong in my setup?

Best regards

Ole



Re: Package not migrating

2017-01-12 Thread Ole Streicher
Andrey Rahmatullin <w...@debian.org> writes:
> On Thu, Jan 12, 2017 at 10:33:58AM +0100, Ole Streicher wrote:
>> Hi,
>> 
>> I still do not completely understand all causes why a package does not
>> migrate:
>> 
>> sunpy is a valid candidate but doesn't migrate. None of the pages shows
>> a reason:
>> 
>> https://qa.debian.org/excuses.php?package=sunpy
>> https://release.debian.org/britney/update_excuses.html#sunpy
> https://release.debian.org/britney/update_output.txt OTOH says:
>
> trying: sunpy
> skipped: sunpy (0, 0, 255)
> got: 36+0: a-3:i-23:a-0:a-0:a-0:m-0:m-7:m-0:p-0:s-3
> * s390x: python-sunpy, python3-sunpy

How do I interpret this? The buildd status 

https://buildd.debian.org/status/package.php?p=sunpy

has "Installed" as status for python-sunpy.

Cheers

Ole



Package not migrating

2017-01-12 Thread Ole Streicher
Hi,

I still do not completely understand all causes why a package does not
migrate:

sunpy is a valid candidate but doesn't migrate. None of the pages shows
a reason:

https://qa.debian.org/excuses.php?package=sunpy
https://release.debian.org/britney/update_excuses.html#sunpy

At the same time, other packages migrate, like python-astropy which was
uploaded at the same time:

https://tracker.debian.org/news/831295

What is the cause for sunpy and why is nothing displayed in the
mentioned links?

Best regards

Ole



Re: debian/watch: FTP with version encoded (only) in directory

2017-01-07 Thread Ole Streicher
Hi Paul,

Paul Wise  writes:
> With just the ftp site alone it can't work (see below), luckily for
> you there is a github page:
>
> http://www.star.bristol.ac.uk/~mbt/stilts/#install
> https://github.com/Starlink/starjava/releases
>
> So this monstrosity should work:
>
> version=3
> options="uversionmangle=s/\-/./,downloadurlmangle=s{.*/stilts-([\d\.\-]+)\.zip}{ftp://andromeda.star.bris.ac.uk/pub/star/stilts/v$1/stilts_src.zip}" \
> https://github.com/Starlink/starjava/releases .*/archive/stilts-([\d\.\-]+).zip

Thank you. I am however a bit afraid, since this depends on upstream
keeping both really consistent.

> It won't work with the ftp site because uscan doesn't let the version
> number be in the directory name and the downloadurlmangle workaround
> doesn't work with ftp. I think both of these are probably things that
> uscan needs to support. There may be bugs for them, please check and
> if not, file new ones.

I will have a look; however it may be faster to ask upstream to have a
better supported url.

Best regards

Ole



debian/watch: FTP with version encoded (only) in directory

2017-01-05 Thread Ole Streicher
Hi,

I have the following sample download URL

ftp://andromeda.star.bris.ac.uk/pub/star/stilts/v3.0-9/stilts_src.zip

Corresponding Debian version number should be 3.0.9.

I tried

version=3
options="uversionmangle=s/\-/./,filenamemangle=s/\/$/.zip/" \
ftp://andromeda.star.bris.ac.uk/pub/star/stilts/v([\d\.\-]+)/ stilts_src.zip

but it doesn't work; basically it adjusts the upstream version to be 1:

uscan info: Newest upstream tarball version selected for download 
(uversionmangled): 1

How can I get this right?

Cheers

Ole




Non-understandable piuparts

2016-12-13 Thread Ole Streicher
Hi,

I have a piuparts message that I don't understand:

https://piuparts.debian.org/sid/fail/python-pyvo_0.4.1+dfsg-1.log

it basically complains the python-requests could not be installed;
however this package is available both in sid and testing (for more than
6 months without change), and manually installing python-pyvo also works
on both distributions.

Could someone decrypt the piuparts log please?

Best regards

Ole



Package not migrating

2016-11-30 Thread Ole Streicher
Hi,

I have a package (casacore-data-tai-utc), that doesn't migrate to
testing, even if it is marked as "valid candidate":

excuses:
 * 12 days old (needed 10 days)
 * casacore-data-tai-utc/i386 unsatisfiable Depends: python3-casacore
 * Valid candidate

The second item comes from the fact that the first release of the package
was erroneously done with "Arch: any" instead of the correct "Arch:
all". This was fixed with the current version.

How do I get rid of this excuse?

Cheers

Ole



Re: Debian privacy policy

2016-11-17 Thread Ole Streicher
Paul Wise <p...@debian.org> writes:
> On Thu, Nov 17, 2016 at 12:17 AM, Ole Streicher wrote:
>> a reference that Debian prefers strong privacy
>
> AFAICT we don't have an official statement about this, but:
> https://lists.debian.org/debian-policy/2008/02/msg00060.html [...]

Is there a reason why it is not there?

> Policy says:
>
> For packages in the main archive, no required [debian/rules] targets
> may attempt network access.

That is different. debian/rules targets don't attempt network access in
my case. It is the final program which does it.

> My personal opinion is that Debian policy should be:
>
> Debian packages must respect sysadmin and user privacy and encourage
> sysadmins and users to respect the privacy of everyone. So, disabled
> by default, informed consent and don't manipulate people into
> destroying their privacy with click-through stuff. Some discussion of
> click-through culture is in the recent episode of FaiF:
>
> http://faif.us/cast/2016/nov/01/0x5E/

I observe that the common opinion in Debian is strictly pro privacy --
but why is it not in the policy? It is quite hard to discuss those
topics with upstream if there is no reference to a settled opinion, but
rather a number of lengthy discussions.

Best regards

Ole



Re: Debian privacy policy

2016-11-17 Thread Ole Streicher
Sean Whitton <spwhit...@spwhitton.name> writes:
> On Wed, Nov 16, 2016 at 05:17:32PM +0100, Ole Streicher wrote:
>> for a discussion with upstream (removal of a default "anonymously
>> logging home" feature), I would like to have a reference that Debian
>> prefers strong privacy (no default logging, even not anonymously) over
>> usefullness for upstream. The only point I could find is the "Our
>> priorities are our users and free software" point in the Social
>> Contract, which is very general and interpretable. The policy seems to
>> be silent about this. Did I just not look careful enough?
>
> We have several Lintian warnings connected to privacy (see tags
> privacy-breach-*).  Lintian reflects community consensus on packaging
> standards as much as policy does.

Hmm, I would rather cite the consensus itself than its
reflection.

Lintian is just a tool to help us to create good packages; however it
cannot be a reference.

First example: Multi-Arch is community consensus. However, a lintian
warning for non-multiarch packages is not there, despite that there is a
bug asking for it (#724988).

Second example: For Python it is common practise to put everything into
/usr/lib/$arch/python* if the package is arch dependent -- including
in-package images. However, one gets a lintian warning about that.

--> Lintian just gives a hint, not a reference. A short while ago, there
was even a general complaint in some mailing list that "Make Lintian
happy" is a bad explanation to implement some change (I can't find it
anymore, and don't know the author).

Best regards

Ole



Debian privacy policy

2016-11-16 Thread Ole Streicher
Hi,

for a discussion with upstream (removal of a default "anonymously
logging home" feature), I would like to have a reference that Debian
prefers strong privacy (no default logging, even not anonymously) over
usefulness for upstream. The only point I could find is the "Our
priorities are our users and free software" point in the Social
Contract, which is very general and interpretable. The policy seems to
be silent about this. Did I just not look careful enough?

Cheers

Ole



Re: ITP's not showing up on debian-devel?

2016-11-08 Thread Ole Streicher
Andrey Rahmatullin  writes:
> On Mon, Nov 07, 2016 at 11:28:55PM -0800, Walter Landry wrote:
>> Hi Everyone,
>> 
>> I recently posted two ITP's
>> 
>> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=843325
>> http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=843570
>> 
>> The confirmation message said that they would be forwarded to
>> debian-devel, but it has been a while and I do not see either of them
>> there.  Did I mess something up in the ITP, is there some moderation
>> queue, or are my emails getting eaten?
> Your emails are getting eaten.
> https://lists.debian.org/debian-devel/2016/11/msg00281.html
> https://lists.debian.org/debian-devel/2016/11/msg00282.html

No, they aren't. The two mails you refer to were manually (re-)sent
(bounced) by me, not by the bug system (check the "Resent-From:"
header).

Cheers

Ole



Re: Data updates in debian packages

2016-10-31 Thread Ole Streicher
Paul Wise <p...@debian.org> writes:
> On Sat, Oct 29, 2016 at 8:45 PM, Ole Streicher wrote:
>> The package in question (casacore) wants them in a specific format "CASA
>> table" (which is uniformly used within that package), and dependent
>> packages access this in that specific format. The only way would be to
>> create this table from another leap second table (instead of our current
>> source usno.navy.mil), and to update this every time the original table
>> is updated (which I would have to learn how to do this).
>
> You can use dpkg triggers to update files in response to packages
> updating other files.

I tried this, namely (the source package has only one binary
package):

 debian/triggers --
interest /usr/share/zoneinfo/leap-seconds.list
---8<--

 debian/postinst --
#!/bin/sh

set -e

case "$1" in
triggered|configure)
casacore-update-tai_utc
;;
abort-upgrade|abort-remove|abort-deconfigure)
;;
*)
echo "postinst called with unknown argument \`$1'" >&2
exit 1
;;
esac

#DEBHELPER#
---8<--

However, I now get the following error when I try to update tzdata:

dpkg: cycle found while processing triggers:
 chain of packages whose triggers are or may be responsible:
  casacore-data-tai-utc -> casacore-data-tai-utc
 packages' pending triggers which are or may be unresolvable:
  casacore-data-tai-utc: /usr/share/zoneinfo/leap-seconds.list
dpkg: error processing package casacore-data-tai-utc (--configure):
 triggers looping, abandoned
Errors were encountered while processing:
 casacore-data-tai-utc

What is my mistake here?
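
(For reference, deb-triggers(5) also knows a "noawait" variant, which is
the usual way to break such trigger cycles when the interested package
does not need to await the trigger processing; a minimal sketch:)

 debian/triggers --
# noawait: the triggering package (tzdata) is not kept in the
# triggers-awaited state, which avoids the cycle shown above
interest-noawait /usr/share/zoneinfo/leap-seconds.list
---8<--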

Best regards

Ole



Re: Data updates in debian packages

2016-10-31 Thread Ole Streicher
Christian Seiler <christ...@iwakd.de> writes:
> On 10/31/2016 09:07 AM, Ole Streicher wrote:
[leap seconds]
>> We need it to put correct time on astronomical registrations, so it is
>> most important to have them once they are effective. Having them in
>> advance would be an additional plus, however, since f.e. a computer may
>> be disconnected during/after the observation, if that happens on a place
>> without internet connection.
>
> Data might help here, so I've looked at the past 3 leap seconds that
> were introduced [...]
>
> What this does say is that stable/updates and oldstable (LTS) had
> updated leap seconds information slightly less than 3 months before
> the leap second, in some cases even a bit earlier. [...]
>
> Hope this information helps in you evaluating this.

Thank you very much for this detailed information! This helps a lot for
the decision (we will depend on tzdata), and it gives also a good
argument for discussion upstream.

Best regards

Ole



Re: Data updates in debian packages

2016-10-31 Thread Ole Streicher
Russ Allbery  writes:
> The required timeliness depends a lot on what you're using leap seconds
> for, and in particular if you need to know about them far in advance, or
> if it's only necessary to have an updated table before the leap second
> itself arrives.

We need it to put correct time on astronomical registrations, so it is
most important to have them once they are effective. Having them in
advance would be an additional plus, however, since f.e. a computer may
be disconnected during/after the observation, if that happens on a place
without internet connection.

Best regards

Ole



Re: Data updates in debian packages

2016-10-30 Thread Ole Streicher
On 30.10.2016 04:38, Paul Wise wrote:
> On Sat, Oct 29, 2016 at 8:45 PM, Ole Streicher wrote:
>> The update script itself could even be distributed with the casacore
>> package itself. And for simplicity I would make
>> casacore-data-autoupdater a binary package within the casacore source
>> package (since this is the main dependency anyway).
>>
>> Comments on that? What would be the best dependency specification then?
>> casacore-data-autoupdater "suggests" casacore-data-XXX and/or vice versa?
> 
> casacore-data-autoupdater Enhances: casacore-data-XXX

Isn't this redundant? I always thought that "Enhances" is just the
reverse of "Suggests" ("A enhances B  <=> B suggests A").
The disadvantage of "Enhances" would be that it would need to know which
packages there are -- so every time a new data package is added, we
would need to update the updater package.

> casacore-data-XXX Recommends: casacore-data-autoupdater

This would raise privacy concerns, since recommended packages are
installed by default, and this one would connect to some .mil domain
servers. Why not "suggests"?

>>> Make sure that any security/privacy consequences of the non-apt update
>>> method are dealt with.
>>
>> If you have comments on my proposal, please comment.
> 
> I don't know enough about the formats and the download processes to comment.

Formats, download processes and further processing are data dependent
(and therefore part of the casacore-data-XXX package). The autoupdater
would just execute the update scripts that need to be provided by the
individual packages.

Best regards

Ole



Re: Data updates in debian packages

2016-10-30 Thread Ole Streicher
On 30.10.2016 04:42, Paul Wise wrote:
> On Sun, Oct 30, 2016 at 4:36 AM, Ole Streicher wrote:
> 
>> The canonical source for leap seconds is the IERS. Our current plan was
>> to take the leap second list from there and build our package from this
>> (as it is done in the casacore-data upstream). This guaranteed that we
>> always have the actual definition (... as long as we do our updated
>> package ASAP).
>>
>> When we switch that to tzdata, then we get the leap second from a place
>> that is not strictly the original source, but may have some delay: first
>> the tzdata upstream package needs to be updated, and then it needs to be
>> packaged (... and possibly backported).
>>
>> So my question is: how safe is it to assume that this whole process is
>> quick (let's say: a few weeks)? If someone works later on Stretch and
>> has an outdated leap second, this could cause problems. Especially if he
>> has no direct information about the actuality of the leap second
>> definition (which he would have in the case of an leap second package
>> taking the value directly from IERS -- we could use the date of the
>> announcement as version number there).
> 
> Where does the IERS data come from?

IERS is the instance which actually decides about the leap second,
namely by this file:

ftp://hpiers.obspm.fr/iers/bul/bulc/bulletinc.dat

I couldn't find the original source now, but see f.e. wikipedia: "Among
its other functions, the IERS is responsible for announcing leap seconds."

> I think the tzdata version of the data comes from the IETF:

IETF is responsible for internet standards, not for leap seconds. They
will take the leap seconds from IERS. I would assume that this
connection is well-established to rely on it. I was not so much
questioning upstream here, but I worry a bit about the Debian package
for tzdata: how sure can I be that the tzdata is up to date (wrt upstream)?

Best regards

Ole



Re: Data updates in debian packages

2016-10-29 Thread Ole Streicher
Ben Finney <bign...@debian.org> writes:
> Ole Streicher <oleb...@debian.org> writes:
>> How sure can one be that they will be installed in time?
>
> This confuses me too. If the file is installed, you have the
> leap-seconds data for the installed version of ‘tzdata’.
>
> So I think I don't understand. What specific concern do you have about
> the leap seconds data from the ‘tzdata’ package?

The canonical source for leap seconds is the IERS. Our current plan was
to take the leap second list from there and build our package from this
(as it is done in the casacore-data upstream). This guaranteed that we
always have the actual definition (... as long as we do our updated
package ASAP).

When we switch that to tzdata, then we get the leap second from a place
that is not strictly the original source, but may have some delay: first
the tzdata upstream package needs to be updated, and then it needs to be
packaged (... and possibly backported).

So my question is: how safe is it to assume that this whole process is
quick (let's say: a few weeks)? If someone works later on Stretch and
has an outdated leap second, this could cause problems. Especially if he
has no direct information about the actuality of the leap second
definition (which he would have in the case of an leap second package
taking the value directly from IERS -- we could use the date of the
announcement as version number there).

Best regards

Ole



Re: Data updates in debian packages

2016-10-29 Thread Ole Streicher
Hi Paul,

On 29.10.2016 03:37, Paul Wise wrote:
> On Fri, Oct 28, 2016 at 6:38 PM, Ole Streicher wrote:
>> We have the problem (I am not sure whether I posted about this already),
>> that the "casacore" package needs additional "casacore-data-XXX"
>> packages, providing the basic data to work with casacore. Some of the
>> data are almost immutable, others (for example leap seconds) are
>> changing every year or so, and others change quite rapidly (high
>> precision ephemides forecasts). They all can be downloaded from some FTP
>> servers.
> 
> FYI leap seconds are already packaged multiple times in Debian, so
> please do not add another copy of them.

The package in question (casacore) wants them in a specific format "CASA
table" (which is uniformly used within that package), and dependent
packages access this in that specific format. The only way would be to
create this table from another leap second table (instead of our current
source usno.navy.mil), and to update this every time the original table
is updated (which I would have to learn how to do this).

Probably the canonical source would be:

> tzdata: /usr/share/zoneinfo/leap-seconds.list

however it worries me a bit that leap seconds are not directly mentioned
there. How sure can one be that they will be installed in time?

>> How should the update service work? Can it just overwrite the existing
>> files? How should one handle it if an update (with possibly older data)
>> is installed, so as not to downgrade the data?
> 
> Check out pciutils/usbutils and similar.
> 
> Essentially:
> 
> Make the applications look in /var by default.
> 
> Put the packaged data in /usr/share.
> 
> Have the postinst symlink from /var to /usr/share when the /var
> location is missing or older than the /usr/share location.

Looks like a plan ;-) I'll start there. What would be the default place?
/var/lib/casacore/data?
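
(A minimal postinst sketch of the symlink logic quoted above; both paths
are assumptions following the question:)

#!/bin/sh
set -e
PKGDATA=/usr/share/casacore/data
VARDATA=/var/lib/casacore/data
# point /var at the packaged copy unless a newer downloaded copy exists
if [ ! -e "$VARDATA" ] || [ "$PKGDATA" -nt "$VARDATA" ]; then
    rm -rf "$VARDATA"   # may be an outdated directory of downloads
    ln -s "$PKGDATA" "$VARDATA"
fi
#DEBHELPER#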

> Have an update script that can be run by the sysadmin or from cron
> that downloads the latest version and atomically replaces the data in
> the /var location.

What I would do here is a separate package "casacore-data-autoupdater"
that provides that service for all installed casacore-data-XXX packages.
That package would install itself into /etc/cron.daily and, when called,
check the age of each installed data table and update if necessary.
Having this service centralized would avoid a debconf script for each
package to ask the user several times for if he wants to auto-update
that table.

The name and the description of the package would make it clear that it
will access the data via net.

The update script itself could even be distributed with the casacore
package itself. And for simplicity I would make
casacore-data-autoupdater a binary package within the casacore source
package (since this is the main dependency anyway).

Comments on that? What would be the best dependency specification then?
casacore-data-autoupdater "suggests" casacore-data-XXX and/or vice versa?

> Make sure that any security/privacy consequences of the non-apt update
> method are dealt with.

If you have comments on my proposal, please comment.

Best regards

Ole



Data updates in debian packages

2016-10-28 Thread Ole Streicher
Hi,

We have the problem (I am not sure whether I posted about this already),
that the "casacore" package needs additional "casacore-data-XXX"
packages, providing the basic data to work with casacore. Some of the
data are almost immutable, others (for example leap seconds) are
changing every year or so, and others change quite rapidly (high
precision ephemerides forecasts). They all can be downloaded from some FTP
servers.

My question is now how to provide a good and consistent packaging:
Usually, one would just put the data into a package. This works nicely
for the immutable data, and reasonably for the slowly changing data. The
fast changing data shall be available for all people, but not everyone
needs a daily update. So, for consistency, and to have them available in
CI and build time tests, I would like to also package them directly, but
then to provide an (optional) update service.

How should the update service work? Can it just overwrite the existing
files? How should one handle it if an update (with possibly older data)
is installed, so as not to downgrade the data?

Is there any experience with that?

Best regards

Ole



Re: dcut cancel|rm command

2016-10-18 Thread Ole Streicher
Hi,

On 18.10.2016 18:17, Gianfranco Costamagna wrote:
>> dcut cancel -f wsclean_1.12-3_source.changes 
>>
>> but this returns a mail "No upload found:
>> wsclean_1.12-3_source.changes".
> 
> the package has already been accepted?

It isn't; at least I didn't get the acceptance mail and it is not in the
"news" section of the tracker.

> it is not on the UploadQueue anymore
> ftp://ftp.upload.debian.org/pub/UploadQueue/
> 
> https://lists.alioth.debian.org/pipermail/debian-astro-maintainers/Week-of-Mon-20161017/003863.html

That is just the upload message. An acceptance message is still missing.

> dak is slow today, due to something that broke it.
> 
> TLTR; too late!

Hmm, what is this additional step between upload and acceptance? Can I
cancel it there?

Best

Ole




dcut cancel|rm command

2016-10-18 Thread Ole Streicher
Hi,

I am (again) stuck with using the dcut tool to remove an upload.

I have two packages which are uploaded, but still not accepted (in the
incoming queue), which I want to cancel: wsclean and aoflagger.

I thought the way to do would be (with dcut-ng):

dcut cancel -f wsclean_1.12-3_source.changes 

but this returns a mail "No upload found:
wsclean_1.12-3_source.changes".

The other thing I tested was

dcut rm --searchdirs -f wsclean_1.12-3_source.changes 

but this also just returns a mail stating that the files (extracted from
the changes file) are not there.

What am I doing wrong here?

Best regards

Ole



Re: Version comparison with "+repack"

2016-10-07 Thread Ole Streicher
Adam Borowski  writes:
> On Fri, Oct 07, 2016 at 11:23:45AM +, Mattia Rizzolo wrote:
>> agreed.  What about
>> 7.5~rc.2+repack
>> ?
>> 
>> The full stop is ugly at my eyes, but does the work and there are worse
>> things in the world.
>
> What about rc-2+repack?  A matter of taste but I'd call this somewhat less
> ugly.

I already uploaded...

FYI, the bug is https://bugs.debian.org/840002

Best regards

Ole



Re: Version comparison with "+repack"

2016-10-07 Thread Ole Streicher
Mattia Rizzolo <mat...@debian.org> writes:
> On Fri, Oct 07, 2016 at 12:26:09PM +0200, Ole Streicher wrote:
>> Santiago Vila <sanv...@unex.es> writes:
>> > On Fri, Oct 07, 2016 at 11:00:17AM +0200, Ole Streicher wrote:
>> >> dpkg --compare-versions 7.5~rc+repack lt 7.5~rc2+repack && echo
>> >> lt || echo ge
>> >> ge
>> >> 
>> >> What is the best way to fix this?
>> >
>> > The best way I don't know, but I would put the RC number at the end,
>> > i.e. 7.5~rc+repack2 for RC2, 7.5~rc+repack3 for RC3 and so on.
>> 
>> IMO this is not a good idea since it suggests that we now have the
>> second repack of the RC.
>
> agreed.  What about
> 7.5~rc.2+repack
> ?
>
> The full stop is ugly at my eyes, but does the work and there are worse
> things in the world.

This is probably the best compromise; hopefully an RC will not remain
there for long anyway.
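
(A quick sanity check of the new scheme -- in the dpkg comparison
algorithm "+" sorts before ".", so this should now print "lt":)

dpkg --compare-versions 7.5~rc+repack lt 7.5~rc.2+repack && echo lt || echo ge
lt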

I am however wondering about this case, since it shows a discrepancy
between the common usage of "+" (being the delimiter between original
upstream version and Debian-specific removals) and the ordering
procedure... I will probably file a bug against the Debian Policy :-)

Best regards

Ole



Re: Version comparison with "+repack"

2016-10-07 Thread Ole Streicher
Santiago Vila <sanv...@unex.es> writes:
> On Fri, Oct 07, 2016 at 11:00:17AM +0200, Ole Streicher wrote:
>> dpkg --compare-versions 7.5~rc+repack  lt 7.5~rc2+repack && echo lt || echo 
>> ge
>> ge
>> 
>> What is the best way to fix this?
>
> The best way I don't know, but I would put the RC number at the end,
> i.e. 7.5~rc+repack2 for RC2, 7.5~rc+repack3 for RC3 and so on.

IMO this is not a good idea since it suggests that we now have the
second repack of the RC.

> The problem is that it's too late to call "7.5~rc+repack" as "7.5~rc1+repack".

In any case, I would like to stay as close to the original versioning as
possible, so that the users can easily see where it originates from.

Cheers

Ole



Version comparison with "+repack"

2016-10-07 Thread Ole Streicher
Hi,

my package "saods9" has currently a RC release in experimental that is
named

7.5~rc+repack-1

Now, upstream released a second RC which I want to upload as well:

7.5~rc2+repack-1

However, it turns out that this version actually compares *smaller* than
the first RC release. I originally thought that the "+" was chosen because
it would not interfere with the upstream versioning?

dpkg --compare-versions 7.5~rc+repack  lt 7.5~rc2+repack && echo lt || echo ge
ge

What is the best way to fix this?

Best regards

Ole



Re: NFS_SUPER_MAGIC portability

2016-09-25 Thread Ole Streicher
Christian Seiler  writes:
> Therefore, it might be a good idea to know _why_ you want to check for
> NFS here? What's the use case? Perhaps there's a better and more
> portable way to check for that specific thing.

That was the key question ;-)

Scanning the code showed me that the function is actually completely
unused and may be just removed. It is probably just a remnant from some
earlier universe (the code contains a huge legacy codebase).

Thank you!
Best regards

Ole



NFS_SUPER_MAGIC portability

2016-09-25 Thread Ole Streicher
Hi,

I have the problem that in a package (casacore) there is basically the
following code:

-8<
#include <sys/vfs.h>
#include <linux/magic.h>

Bool Directory::isNFSMounted() const
{
   struct statfs buf;
   if (statfs (itsFile.path().expandedName().chars(), &buf) < 0) {
      throw (AipsError ("Directory::isNFSMounted error on " +
                        itsFile.path().expandedName() +
                        ": " + strerror(errno)));
   }
   return buf.f_type == NFS_SUPER_MAGIC;
}
-8<

The linux include subdir is obviously only available on Linux archs, not
on kfreebsd or hurd. From the "statfs" manpage, I had the impression
that the second include is just not needed; however then NFS_SUPER_MAGIC
is not available.

So how do I do this portably (so that I can forward it to upstream as
well)?

Best regards

Ole



Re: uscan for a single text file

2016-07-19 Thread Ole Streicher
Sergio Durigan Junior <sergi...@sergiodj.net> writes:
> On Sunday, July 17 2016, Ole Streicher wrote:
>> If mk-origtargz doesn't repack it, why does it look into it? The symlink
>> could be created without as well.
>
> It makes sure that there is a tarball compressed using the supported
> compression mechanisms, even when it is not interested in unpacking the
> tarball.  This is a design decision, and I don't know why it was made
> this way.  I obviously agree with you that it could be improved.

it is done at the wrong place, since mk-origtargz is called before
uupdate, and uupdate is still able to change the tarball (this is even
recommended by some tutorials).

> I agree with Paul Wise when he says that mk-origtargz should create a
> tarball if the file provided by the user is not one.  I guess I'll give
> this idea a try later (when I have time).

IMO there should be an option so that the user can decide how the tarball
is created from the downloaded file(s).

>> What is the use of uupdate in current workflows (f.e. git-buildpackage)
>> at all? In my opinion, it is bound to one very specific workflow, which
>> at least I personally never used. And the rest of the watch/uscan
>> procedure is workflow-agnostic; it is just the canonical way to get a
>> new upstream tarball.
>
> uupdate not only creates the symlink, but also does some "house-keeping"
> (it makes sure debian/rules is executable, for example).

that sounds a bit arbitrary: uscan is made for the preparation of the
original tarball. Why should it touch d/rules? Especially, since usually
uscan is called for a package that has already an older version (and
therefore also a working d/rules). If there is really a problem with
wrong permissions on d/rules, it would (IMO) be better to let lintian
complain here.

> It will perform different tasks depending on the version you specify
> in your watch file.

I mean 3/4 only (everything else was before my time).

uscan downloads the file
mk-origtargz repacks it (if needed) or created the symlink

what other housekeeping is needed to reach its goal? The manpages
states:

| uscan invokes uupdate to create the Debianized source tree:

but isn't this a step that is highly dependent on the used workflow? If
I use git-buildpackage (or any other git-based workflow), why would I
need to create a debianized source tree?

>> So wouldn't it be better to just replace uupdate by an adjusted
>> mk-origtargz script? Then, one could replace it by a specific script
>> if needed.
>
> Actually, I think it makes more sense to extend mk-origtargz and make it
> honour its name: create an .orig.tar.gz tarball *even* when upstream
> does not provide a tarball.

I mean: it should be done at the place of uupdate, and easily
replaceable by another script if needed. The name does not matter here.

>> BTW, in the queue of casacore-data packages we would also need a watch
>> file + script for packages which download ~100 individual files and put
>> them into a tarball (Upstream doesn't offer a tar download option). Any
>> good ideas here?
>
> Hm, I couldn't find any casacore-data package.  But I found the casacore
> package, which points me to <https://github.com/casacore/casacore> as
> its upstream.  There, I could find the .tar.gz file provided by git
> tags, so I'm not sure if I understood your question, sorry.  Could you
> expand it to me, please?

Sure: casacore is the "base" package, which is now in Debian. For a
proper work, it needs some data files, which are currently "somehow"
created upstream and put together into a single tarball. Just using this
tarball is IMO not acceptable by the Debian policy, since we insist on
having sources. So, I pushed the casacore maintainers to make the
tarball creation transparent and to do it when creating the
casacore-data package. It appears that the tarball is created by
downloading several ASCII data files from different locations (f.e. about
the earth magnetic field), and then processed by a program distributed
with casacore.

Therefore my proposed way is:

* create one source package for each topic/download location
* do the processing when creating the binary packages.

The simplest case here is the one which was used to start this
discussion: the source is just one data file for the earth magnetic
field. The other source packages will be more difficult, since they
download several (up to 100) files; probably the uscan mechanism will
fail here (also because the file names do not contain a version or so).

We discussed that a while ago in the debian-astro mailing list.

Best regards

Ole



Re: uscan for a single text file

2016-07-17 Thread Ole Streicher
Paul Wise <p...@debian.org> writes:
> On Fri, Jul 15, 2016 at 11:13 PM, Ole Streicher wrote:
>
>> I want to create a watch file for a package that contains a single text
>> file (which itself has the version into it):
> ...
>> The "repackage.sh" script would then just create a tarball:
>
> It seems like the right thing for uscan to do here would be to detect
> the format of the downloaded file and pack it into a tarball if it
> isn't one of the known ones.

I think that the case "one text file source" is quite rare. More often
we will meet "one unusual package format" case in future -- like some
new shiny packing format that is still unsupported by uscan. Therefore,
it would be better to have a more flexible way to invoke a repacking
script instead of mk-origtargz.

The replacement of the "uupdate" at the end of the watch line was a good
solution here -- why is this boycotted now?



Re: uscan for a single text file

2016-07-17 Thread Ole Streicher
Sergio Durigan Junior <sergi...@sergiodj.net> writes:
> On Saturday, July 16 2016, Ole Streicher wrote:
>
>> Sergio Durigan Junior <sergi...@sergiodj.net> writes:
>>>> What is wrong here? I thought that mk-orig.tar.gz should be called only
>>>> when it is a tar archive?
>>>
>>> Yeah, uscan is the responsible for invoking mk-origtargz.  That can be a
>>> problem indeed for cases like yours.
>>
>> Hmm, the manpage of uscan says:
>>
>> | Please note the repacking of the upstream tarballs by mk-origtargz
>> | happens only if one of the following conditions is satisfied:
>> |  · USCAN_REPACK is set in the devscript configuration.
>> |  · --repack is set on the commandline.
>> |  · repack is set in the watch line as opts="repack,...".
>> |  · The upstream archive is of zip type including jar, xpi, ...
>> |  · Files-Excluded or Files-Excluded-component stanzas are set in
>> |debian/copyright to make mk-origtargz invoked from uscan remove 
>> |files from the upstream tarball and repack it.
>>
>> None of these is true in my case. So, isn't this a bug in uscan?
>
> This snippet refers to the repacking of the upstream tarball.  Even when
> no repacking is needed/requested, mk-origtargz is still invoked (it is
> resposible for creating the symlink to the .orig.tar.gz file, for
> example).

If mk-origtargz doesn't repack it, why does it look into it? The symlink
could be created without as well.

> yeah, as I mentioned to Gianfranco I also think it is worth adding an
> option to disable the execution of mk-origtargz.  I went ahead and
> submitted the following:
>
>   <https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=831521>

Great. Thanks.

> Yeah, it is curious that mk-origtargz and uupdate both create the same
> symlink from the original tarball to the .orig tarball.

What is the use of uupdate in current workflows (f.e. git-buildpackage)
at all? In my opinion, it is bound to one very specific workflow, which
at least I personally never used. And the rest of the watch/uscan
procedure is workflow-agnostic; it is just the canonical way to get a
new upstream tarball.

So wouldn't it be better to just replace uupdate by an adjusted
mk-origtargz script? Then, one could replace it by a specific script if needed.

BTW, in the queue of casacore-data packages we would also need a watch
file + script for packages which download ~100 individual files and put
them into a tarball (Upstream doesn't offer a tar download option). Any
good ideas here?

Best regards

Ole



Re: uscan for a single text file

2016-07-16 Thread Ole Streicher
Sergio Durigan Junior  writes:
>> What is wrong here? I thought that mk-origtargz should be called only
>> when it is a tar archive?
>
> Yeah, uscan is the responsible for invoking mk-origtargz.  That can be a
> problem indeed for cases like yours.

Hmm, the manpage of uscan says:

| Please note the repacking of the upstream tarballs by mk-origtargz
| happens only if one of the following conditions is satisfied:
|  · USCAN_REPACK is set in the devscript configuration.
|  · --repack is set on the commandline.
|  · repack is set in the watch line as opts="repack,...".
|  · The upstream archive is of zip type including jar, xpi, ...
|  · Files-Excluded or Files-Excluded-component stanzas are set in
|debian/copyright to make mk-origtargz invoked from uscan remove 
|files from the upstream tarball and repack it.

None of these is true in my case. So, isn't this a bug in uscan?

> Here's a hacky solution.  First, in order to avoid calling mk-origtargz
> you need to pass the --no-symlink option to uscan (or set the
> USCAN_SYMLINK environment variable to "no").  That is unfortunately the
> only way, and there is also no opts available that you can use inside
> the watch file.

wouldn't it be worth adding a uscan option "opts=norepack"?

> Also, I found a few problems with your repackaging script.  uupdate will
> expect a certain pattern when decompressing it, like a directory named
> package-version/, so you need to create that as well.  Attached on this
> message is an updated script that seems to work (as far as I have
> tested; I don't have the full package here).

Yea, mine was a quick hack. In principle, I don't see a reason for this
at all. Usually, the package is just downloaded and then processed
further by other tools (gbp import-orig) or untarred manually.

I still don't get the reason for the other stuff that uupdate does. But
thanks for the script; I'll use it ;-)

Best regards

Ole



uscan for a single text file

2016-07-15 Thread Ole Streicher
Hi,

I want to create a watch file for a package that contains a single text
file (which itself has the version into it):

--8<--
version=3
http://www.ngdc.noaa.gov/IAGA/vmod/igrf.html igrf(\d+)coeffs.txt \
 debian debian/repackage.sh
--8<--

The "repackage.sh" script would then just create a tarball:

--8<--
#!/bin/sh
set -e

VERSION=$2
BASEDIR=$(dirname $3)

cd $BASEDIR
tar cf igrf-coefficients_$VERSION.orig.tar igrf${VERSION}coeffs.txt
rm -f igrf${VERSION}coeffs.txt
xz igrf-coefficients_$VERSION.orig.tar
exec uupdate --no-symlink --upstream-version ${VERSION} igrf_${VERSION}.orig.tar.xz
--8<--

However, when I run this, uscan complaints:

--8<--
$ uscan 
uscan: Newest version of igrf-coefficients on remote site is 12, local version 
is 10
uscan:=> Newer package available from
  http://www.ngdc.noaa.gov/IAGA/vmod/igrf12coeffs.txt
Parameter ../igrf12coeffs.txt does not look like a tar archive or a zip file. 
at /usr/bin/mk-origtargz line 375.
uscan: error: mk-origtargz --package igrf-coefficients --version 12 
--compression gzip --directory .. --copyright-file debian/copyright 
../igrf12coeffs.txt gave error exit status 255
--8<--

--debug does not show more here.
What is wrong here? I thought that mk-origtargz should be called only
when it is a tar archive?

Best

Ole



Weird unmet build dependencies on buildds

2016-07-05 Thread Ole Streicher
Hi,

I am trying to get my package "dpuser" compiled. While this works nicely
on my local pbuilder with up-to-date packages, it fails on the buildds:

https://buildd.debian.org/status/package.php?p=dpuser

f.e. for i386:

---8<-
Dependency installability problem for dpuser on i386:

dpuser build-depends on:
- i386:libvtk6-dev
i386:libvtk6-dev depends on:
- i386:python-vtk6 (= 6.3.0+dfsg1-1)
i386:python-vtk6 depends on:
- i386:python-twisted
i386:python-twisted depends on:
- i386:python-twisted-core (>= 16.2.0-1)
i386:python-twisted-core depends on:
- i386:python-openssl
i386:python-openssl depends on:
- i386:python-cryptography (>= 1.3)
i386:python-cryptography depends on missing:
- i386:python-cffi-backend-api-min (<= 9729)
---8<-

I could somehow not trace this dependency chain; locally everything
works well.

What could cause this problem and how should one solve it?

Best regards

Ole



Deterministic "ar" breaks build

2016-06-19 Thread Ole Streicher
Hi,

Since a while, the "ar" command is built with --enable-deterministic-archives,
which basically resets (among other things) the timestamps to zero. This has
the unfortunate disadvantage that Makefiles that use these timestamps do
not work correctly anymore. For example, my wcslib package has

$(WCSLIB)(%.o) : %.c
	-@ echo ''
	   $(CC) $(CPPFLAGS) $(CFLAGS) -c $<
	   $(AR) Ur $(WCSLIB) $%
	-@ $(RM) $%

which results in a repeated compilation of all files every time
something that depends on the wcslib is built.

I could use the "U" flag of ar, but is there a way to get around this
*and* keep determinism at the same time? I don't want to rewrite the
whole Makefile logic here, however.

Best regards

Ole



Re: Best practise for a drop-in replacement of a non-free package

2016-05-23 Thread Ole Streicher
Paul Wise <p...@debian.org> writes:
> On Mon, May 23, 2016 at 12:11 AM, Ole Streicher wrote:
>
>> My question is more the first step , which includes giza as a library
>> with the /potential/ to replace pgplot, but without a strong
>> recommendation.
>
> Since the SONAMEs are different, you probably can just ship it as is,
> without renaming the libraries. Only thing that would be needed is the
> Conflicts between the -dev package and the pgplot equivalent.
>
> Upstream will need to be aware that SONAME 5 is for the non-free
> version and they should not use it until they are binary-compatible
> with it.

OK, this is the simplest solution of course. As I said, I don't have
experiences with upstream, however.

Best regards

Ole



Re: Best practise for a drop-in replacement of a non-free package

2016-05-22 Thread Ole Streicher
Hi Paul,

Paul Wise <p...@debian.org> writes:
> On Sat, May 21, 2016 at 8:29 PM, Ole Streicher wrote:
>> I intend to package the "giza" library [1] that is largely a replacement
>> of the pgplot library that is in Debian non-free.
>
> Does that mean that pgplot5 can be removed from Debian and the reverse
> dependencies transitioned to giza and moved to Debian main? I would
> suggest that is the best way to go if it is possible.

Giza still has some functions unimplemented [1], and I have no
experience with it: neither with the library itself, nor with
upstream. Therefore, I would leave this decision to the individual
package maintainers -- once Giza is in, you may write wishlist bugs for
them if you like :-)

My question is more the first step , which includes giza as a library
with the /potential/ to replace pgplot, but without a strong
recommendation.

Best regards

Ole

[1] http://giza.sourceforge.net/documentation/pgplot.shtml



Best practise for a drop-in replacement of a non-free package

2016-05-21 Thread Ole Streicher
Hi all,

I intend to package the "giza" library [1] that is largely a replacement
of the pgplot library that is in Debian non-free. Pgplot is an
all-in-one package that contains among others the following libraries
and header file (no pkgconfig file here):

/usr/include/cpgplot.h
/usr/lib/libcpgplot.a
/usr/lib/libcpgplot.so
/usr/lib/libcpgplot.so.5
/usr/lib/libcpgplot.so.5.2.2

(also libpgplot.* which follows the same scheme; everything below also
applies to libpgplot)

The giza package originally builds the following libraries, header file,
and pkgconfig file:

/usr/include/cpgplot.h
/usr/lib/<triplet>/libgiza.a
/usr/lib/<triplet>/libgiza.so
/usr/lib/<triplet>/libgiza.so.0
/usr/lib/<triplet>/libgiza.so.0.1.4
/usr/lib/<triplet>/libcpgplot.a
/usr/lib/<triplet>/libcpgplot.so
/usr/lib/<triplet>/libcpgplot.so.0
/usr/lib/<triplet>/libcpgplot.so.0.0.0
/usr/lib/<triplet>/pkgconfig/cpgplot.pc

My question is now how to create the packages. As far as I understand
the policy, there are no rules wrt. conflicts to non-free, right? I
would now just rename the libcpgplot.* to libcpgplot_giza.*, and then
create the following packages:

* giza-dev (Conflicts: pgplot)
/usr/include/cpgplot.h
/usr/lib/<triplet>/libgiza.a
/usr/lib/<triplet>/libgiza.so
/usr/lib/<triplet>/libcpgplot_giza.a
/usr/lib/<triplet>/libcpgplot_giza.so
/usr/lib/<triplet>/pkgconfig/cpgplot.pc

* libgiza0
/usr/lib/<triplet>/libgiza.so.0
/usr/lib/<triplet>/libgiza.so.0.1.4

* libcpgplot-giza0
/usr/lib/<triplet>/libcpgplot_giza.so.0
/usr/lib/<triplet>/libcpgplot_giza.so.0.0.0

This way, programs *using* the non-free pgplot could coexist with
programs using the giza implementation, but not the development
version. Packages that need pgplot for compilation and using pkgconfig
can just depend on giza-dev; if they don't use pkgconfig the linker flag
should be changed to -lcpgplot_giza. However, giza-dev and pgplot cannot
coexist. 

Is this the optimal solution or shall I change something?

If you want to review the package, see [2]

Best regards

Ole

[1] https://bugs.debian.org/649602
[2] https://anonscm.debian.org/cgit/debian-astro/packages/giza.git



Which libstdc++ library?

2016-05-17 Thread Ole Streicher
Hi,

I have a package (dpuser [1]), that during execution may call the c++
compiler g++ for some on-the-fly-generated C++ files that use the
standard C++ library.

I am now curious how I need to specify the runtime dependency on the
-dev library. The C++ compiler is probably just the "g++" package, but how do
I specify the corresponding stdc++ lib?  Using libstdc++-5-dev is not
nice since it will break if the default gcc switches to version 6 (and
also if on some backport version 4 is required). Or is the correct
libstdc++-dev package automatically installed with g++?
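
(A sketch, assuming the latter holds -- the g++ metapackage pulls in the
matching libstdc++-N-dev via its dependency on the versioned compiler --
so the control stanza could simply name the compiler:)

Depends: g++, ${shlibs:Depends}, ${misc:Depends}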

Best regards

Ole

[1] http://anonscm.debian.org/cgit/debian-astro/packages/dpuser.git/



Re: Package Naming

2016-05-16 Thread Ole Streicher
Hi Benda,

Benda Xu  writes:
> I am packaging a library called "casacore" which provides
>
>   libcasa_python3.so.2 and libcasa_python.so.2
>
> with SONAME=2.
>
> How should them be named when the python major version and SONAME could
> cause confusion?

Since no one had a good idea here, I would propose to rename the Python
2 variant of the library to libcasa_python2.so.2. Then the package names 
can be formally created:

libcasa-python32 for the Python 3 variant, and 
libcasa-python22 for the Python 2 variant.

One could also be lazy, just keep the library names as they are, and
use the formally created names:

libcasa-python32 for the Python 3 variant, and 
libcasa-python2  for the Python 2 variant.

Cheers

Ole



Re: Package Naming

2016-05-11 Thread Ole Streicher
Christian Kastner  writes:
> On 2016-05-11 03:41, Benda Xu wrote:
>> I am packaging a library called "casacore" which provides
>>   libcasa_python3.so.2 and libcasa_python.so.2
>> with SONAME=2.
>> How should them be named when the python major version and SONAME could
>> cause confusion?

> According to the Debian Python Policy [1] and assuming from [2] that the
> module name is "casacore"

> [2] http://casacore.github.io/python-casacore/

That is another package. As far as I understand it, the libraries
mentioned by Benda are just some data type converters; they don't
contain an importable module. The package in [2] is built on top of this
library (but a package that is built independently), and this should
of course get the name given in the policy.

The library Benda speaks about is not covered by the Python Policy, but
a normal shared library that should be covered by the standard Debian
Policy.

Cheers

Ole



Re: Please help creating shared *and* static library with cmake

2016-04-08 Thread Ole Streicher
Andreas Tille  writes:
> I need to package libsdsl[1] as some precondition for a Debian Med
> package.  The default cmake build only creates a static library and I
> found a patch to create a shared library.  But since library packages
> should include both I wonder how to get both shared and static library
> without doing to much tricky things.

For the common case, I see no real reason to include both; having just
the dynamic libs and the header is enough, IMO.

The only use case I could imagine is to create an executable that can
run outside of Debian.

Best

Ole



Re: Bug#810822: ITP: MooseFS

2016-01-18 Thread Ole Streicher
Jakub Kruszona-Zawadzki  writes:
> On 15 Jan, 2016, at 15:09, Dmitry Smirnov  wrote:
>> For quite a while LizardFS is developed with community using public
>> VCS and bug tracker (GitHub) as well as Gerrit code review system and
>> continuous integration system. LizardFS have more development
>> transparency than MooseFS ever had.

> And in case of file system it is not good idea. 

Could you explain that a little bit more please?

Best regards

Ole



Re: cowbuilder/pbuilder: require newer version from experimental

2015-10-27 Thread Ole Streicher
Alex Mestiashvili  writes:
>> What could be the cause that the dependency is not satisfied from
>> experimental here?

> may be a "cowbuilder --update" is missing ?

No; I did this. I also tested that I can install it manually (with
"apt-get python-astropy-helpers=1.1~b1-1").

Best regards

Ole



cowbuilder/pbuilder: require newer version from experimental

2015-10-27 Thread Ole Streicher
Hi,

to build an "experimental" version of one of my packages, I need to
specify a package that is in unstable (1.0.5-1) and in experimental
(1.1~b1-1), and I need the experimental version here.

With "cowbuilder --login --save-after-login", I have put the
"experimental" distribution into /etc/apt/sources.list:

(chroot) # cat /etc/apt/sources.list
deb http://ftp.de.debian.org/debian/ sid main
deb http://ftp.de.debian.org/debian experimental main

and I did "sudo cowbuilder --update" afterwards. When I then do a 
"cowbuilder --login", I can install the needed version manually:

(chroot) # apt-get install python-astropy-helpers=1.1~b1-1
Reading package lists... Done
Building dependency tree   
Reading state information... Done
The following extra packages will be installed: [...]

However, when I now try to use this to build my package, it does not
work:

$ pdebuild
[...]
This package is uninstallable
Dependency is not satisfiable: python-astropy-helpers (>= 1.1~)
[...]

The package I am trying to build is python-astropy, from the alioth git:

http://anonscm.debian.org/cgit/debian-astro/packages/python-astropy.git/tree/?h=experimental

What could be the cause that the dependency is not satisfied from
experimental here?
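
(For reference: experimental is marked NotAutomatic, so apt gives its
packages priority 1 and the build-dependency resolver will not pick them
up on its own; a pinning sketch for inside the chroot, with the package
name taken from the log above:)

 /etc/apt/preferences.d/experimental --
Package: python-astropy-helpers
Pin: release a=experimental
Pin-Priority: 600
---8<--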

Best regards

Ole



Re: Setting up CI environment

2015-10-21 Thread Ole Streicher
Tomasz Buchert <tom...@debian.org> writes:
> On 21/10/15 10:34, Ole Streicher wrote:
>> I was just trying to setup the debci environment, following the
>> documentation
> FTR, https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=799760

Thanks, that hint from the bug discussion helped:

union-type=overlay
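
(That is a schroot option; a sketch of where it goes, with the file name
being an assumption:)

 /etc/schroot/chroot.d/debci-unstable-amd64 --
[debci-unstable-amd64]
# overlay replaces the aufs union type that newer kernels no longer provide
union-type=overlay
---8<--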

Cheers

Ole



Setting up CI environment

2015-10-21 Thread Ole Streicher
Hi,

I was just trying to setup the debci environment, following the
documentation

https://ci.debian.net/doc/

However, when I run the command

$ adt-run --user debci --output-dir /tmp/debci-output \
  /var/cache/pbuilder/result/aplpy_1.0-1_amd64.changes \
  --- schroot debci-unstable-amd64

I get the following output:

adt-run [10:31:47]: version @version@
adt-run [10:31:47]: host donar; command line: /usr/bin/adt-run --user debci 
--output-dir /tmp/debci-output 
/var/cache/pbuilder/result/aplpy_1.0-1_amd64.changes --- schroot 
debci-unstable-amd64
: failure: ['schroot', '--quiet', '--begin-session', '--chroot', 
'debci-unstable-amd64'] failed (exit status 1)
adt-run [10:31:47]: ERROR: testbed failure: cannot send to testbed: 
['BrokenPipeError: [Errno 32] Broken pipe\n']

I then tried to run the schroot locally:

$ schroot -c debci-unstable-amd64
E: 10mount: mount: unknown filesystem type 'aufs'
E: debci-unstable-amd64-c1087f9b-957d-496a-98cc-58bf2ed3b0e1: Chroot setup 
failed: stage=setup-start

and finally found that my kernel (from sid, Linux donar 4.2.0-1-amd64 #1
SMP Debian 4.2.3-2 (2015-10-14) x86_64 GNU/Linux) has no aufs:

$ grep aufs /proc/filesystems 

$ find /lib/modules -name aufs\*

$

Why is aufs gone, and what could I do for replacement?

Best regards

Ole



Endianness testing?

2015-09-26 Thread Ole Streicher
Hi,

how do I test the endianness of a system for inclusion in
debian/rules?

lscpu | fgrep -q "Little Endian"

comes to mind; but is this safe? qemu user emulation would probably
report something wrong here?
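
(A sketch that avoids asking the running kernel at all: dpkg-architecture
exports the endianness of the host architecture, so it also stays correct
under qemu user emulation:)

# in debian/rules; architecture.mk sets DEB_HOST_ARCH_ENDIAN
# to "little" or "big" via dpkg-architecture
include /usr/share/dpkg/architecture.mk

ifeq ($(DEB_HOST_ARCH_ENDIAN),little)
    # little-endian specific settings go here
endif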

Best regards

Ole



+dfsg extension with Files-Excluded: in d/copyright

2015-09-01 Thread Ole Streicher
Hi,

when using the Files-Excluded: tag in debian/copyright, in the past
there was a "+dfsg" suffix added to the version number
automatically. This seems to have changed; is there a reason for that?
Is there any case to use Files-Excluded: *without* actually adding the
suffix?

What is the recommended way for the watch file so that it automatically
generates the correct version number for a newly created orig.tar file?

I tried to add "uversionmangle", but it didn't work well:

version=3
opts=dversionmangle=s/\+dfsg//,uversionmangle=s/$/+dfsg/ \
 http://heasarc.gsfc.nasa.gov/FTP/software/lheasoft/fv/fv(.+\..+)_src\.tar\.gz

now always detects a new version 5.4+dfsg, even if that is already in
debian/changelog.

Best

Ole



Re: +dfsg extension with Files-Excluded: in d/copyright

2015-09-01 Thread Ole Streicher
Sebastiaan Couwenberg <sebas...@xs4all.nl> writes:
> On 01-09-15 11:51, Ole Streicher wrote:
>> What is recommended way for the watch file that it automatically
>> generated to correct version number for a newly created orig.tar file?
> Add the repacksuffix option

Is the empty default value for repacksuffix a good choice here? I cannot
imagine a case where one removes something from the upstream tarball and
does not add a suffix to the version. Specifically, I had the case that
I first (last year) added a few Files-Excluded entries and created a
"+dfsg" tarball, and when I later updated with the new devscripts
version, I forgot to check the suffix and accidentally created a package
without it. This could be avoided if the suffix defaulted to +dfsg when
files are excluded, or if uscan failed with an error in this case. Or if
Lintian reported it.
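
For illustration, a watch file combining repacksuffix with the
dversionmangle from my first mail might look like this (a sketch, not
tested here):

--- debian/watch ---
version=3
opts=repacksuffix=+dfsg,dversionmangle=s/\+dfsg$// \
 http://heasarc.gsfc.nasa.gov/FTP/software/lheasoft/fv/fv(.+\..+)_src\.tar\.gz
8<--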

Best regards

Ole



Re: Best practices for downloader packages

2015-08-18 Thread Ole Streicher
Adam Borowski <kilob...@angband.pl> writes:
> On Mon, Aug 17, 2015 at 01:29:12PM +0200, Ole Streicher wrote:
>>>> * Since the download code is DFSG-free, the downloader goes to
>>>> contrib, independently of the copyright of the data, right?
>>>
>>> Right.
>>>
>> which is a bit of a pity, since the package *is* actually DFSG-free,
>> including the downloaded data. The reason that they are not in the
>> Debian archive is purely technical, not a license one: the source data
>> size is about 8 GB, and the binary packages range from 160 MB to ~13 GB.
>
> As, unlike most downloaders, it doesn't deal with non-free data, I don't get
> why it would be unacceptable for main.  It's strictly better than clients
> for various proprietary services, which sit in main.

After lunch, someone pointed out that the policy only requires packages
that download *software* to go to contrib. Since mine would download
*data* only, it would probably be suitable for main.

Best regards

Ole



Re: Best practices for downloader packages

2015-08-17 Thread Ole Streicher
Jakub Wilk <jw...@debian.org> writes:
> * Ole Streicher <oleb...@debian.org>, 2015-08-16, 19:17:
>> * Shall it be native? There is no local upstream code, so the
>> directory is just empty (except the debian/ subdir). However,
>> "native" may not be the best mark for it, since the package is not
>> really a Debian-only one (the data may be used elsewhere as well).
>
> My feeling is that if there's no upstream code, then the package
> should be native.

There may be no upstream stuff in the package itself, but there is
upstream data in the installed package.

> Does upstream make formal versioned releases? That could maybe justify
> a non-native version.

More or less. Yesterday I realized that although there are versions,
they are not really consecutive; upstream recommends the older package
in some situations. Therefore, I will create two source packages...

>> * Since the download code is DFSG-free, the downloader goes to
>> contrib, independently of the copyright of the data, right?
>
> Right.
>
which is a bit of a pity, since the package *is* actually DFSG-free,
including the downloaded data. The reason that they are not in the
Debian archive is purely technical, not a license one: the source data
size is about 8 GB, and the binary packages range from 160 MB to ~13 GB.

My feeling here is that this undermines the philosophy of the DFSG: if
someone wants a DFSG-compatible system, he still needs to add contrib
to sources.list to get these files.

Best regards

Ole



Best practices for downloader packages

2015-08-16 Thread Ole Streicher
Hi,

I want to create a package that purely downloads some (large scientific)
data, and I am unsure how to create the package:

* Shall it be native? There is no local upstream code, so the
  directory is just empty (except the debian/ subdir). However, "native"
  may not be the best mark for it, since the package is not really a
  Debian-only one (the data may be used elsewhere as well).

  On the other hand, when it is not native, I must create a dummy/empty
  .orig.tar.gz, right? (See the sketch after this list.)

* Shall I use the word "downloader" in the package name? That would make
  the package name longer: at the moment, the packages would have names
  like astrometry-data-4208-4219; appending "downloader" would make
  them even longer.

* Do I need to specify the copyright of the *downloaded* files? If yes,
  where? debian/copyright is just for the source, not for the result...

* Since the download code is DFSG-free, the downloader goes to contrib,
  independently of the copyright of the data, right?
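
Regarding the dummy .orig.tar.gz mentioned above: a minimal way to
create one could be (a sketch; the package name and version are just
examples, the tarball contains only an empty top-level directory):

$ mkdir -p astrometry-data-4208-4219-1.0
$ tar -czf astrometry-data-4208-4219_1.0.orig.tar.gz \
    astrometry-data-4208-4219-1.0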
  
Best regards

Ole


