Bug#883628: ITP: ioport -- direct access to I/O ports from the command line

2018-01-05 Thread Christian Seiler

On 01/05/2018 03:30 PM, Lubomir Rintel wrote:

On Fri, 5 Jan 2018 12:16:10 +0100 Tobias Frost  wrote:

You keep the Debian revision the same until it is sponsored.


That's what I initially meant to do, but mentors.debian.org won't let
me upload a changed package with the same version number.


You can remove the old package on Mentors via the web interface and
then upload the package in that version again with different contents.

Note that in my experience reuploading an existing version to Mentors
worked when done via FTP, but not via HTTP. (But that may be outdated
information, I haven't tried that in forever.)

Regards,
Christian



Re: How to determine the filename for dlopen()

2017-11-29 Thread Christian Seiler

On 2017-11-26 15:26, wf...@niif.hu wrote:

At least I can't see any other
way to express alternative groups of library dependencies like ((libnss
and libnspr) or libssl), which would be needed for crypto plugins.


Well, if a software wants to support alternatives, then the following
would work quite well:

 - Software has an internal abstraction layer for these libraries.
   (It will need that anyway.)
 - Any integration with any of these libraries is done in plugins
   for that specific software (which are dlopen()d). The plugins
   themselves expose only the abstraction layer, but are in turn
   linked against the actual libraries.

Since the internal plugin interface between the software and the
various plugins for different libraries is something that the
authors of the software themselves control, there's never going
to be an issue there: you upgrade them in lock-step and everything
just works.

And since the plugin libraries themselves are directly linked
against the actual libraries, automatic dependency generation will
just work, as well as symbol versioning.
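
To make that concrete, here is a minimal sketch of the dlopen() side
of such a scheme - the paths, the exported symbol name and the
crypto_ops abstraction layer are all invented for illustration:

#include <dlfcn.h>
#include <cstdio>

/* The abstraction layer every plugin exposes; its layout is under
   the software's own control, so it can change in lock-step. */
struct crypto_ops {
    int (*init)(void);
    int (*hash)(const void *data, unsigned long len, unsigned char *out);
};

int main()
{
    /* Try the NSS-based plugin first, fall back to the OpenSSL one. */
    const char *candidates[] = {
        "/usr/lib/mysoftware/plugins/crypto-nss.so",
        "/usr/lib/mysoftware/plugins/crypto-openssl.so",
    };
    void *handle = nullptr;
    for (const char *name : candidates)
        if ((handle = dlopen(name, RTLD_NOW)) != nullptr)
            break;
    if (!handle) {
        std::fprintf(stderr, "no crypto plugin found: %s\n", dlerror());
        return 1;
    }
    /* Each plugin exports one well-known symbol returning its ops table. */
    auto get_ops = reinterpret_cast<const crypto_ops *(*)(void)>(
        dlsym(handle, "mysoftware_get_crypto_ops"));
    if (!get_ops) {
        std::fprintf(stderr, "bad plugin: %s\n", dlerror());
        return 1;
    }
    return get_ops()->init() == 0 ? 0 : 1;
}

Each plugin .so is linked against its actual crypto library, so
dpkg-shlibdeps records the dependency on that library for the plugin
package rather than for the main package.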

In Debian packaging you'd ideally want to separate out each
alternative into its own package, so that the main package doesn't
need to depend on all alternatives.

Regards,
Christian



Re: CMake help needed to enable hdf5 for gatb-core (Was: [MoM] Re: gatb-core packaging)

2017-11-28 Thread Christian Seiler
Hi Andreas,

On 11/28/2017 11:52 AM, Andreas Tille wrote:
> it turned out that the cmake issue is a bit tricky for a MoM project so I
> gave it a try myself.  The current state of gatb-core packaging is in
> Git[1].  I went as far as my poor cmake knowledge permits to replace the
> cmake hdf5 code to use the Debian packaged code after the internal code
> copy was removed.  Unfortunately I failed to get the proper -I options
> propagated to the compiler call since I'm ending up with:

Problem is that the system-wide hdf5.h is always directly in the include
path (#include <hdf5.h>), whereas the embedded code copy of the project
you're trying to use was somehow put into the source project in such a
way that they used #include <hdf5/hdf5.h> (see the error message). And
since that _adds_ a directory layer, there's no -I flag you can pass that
will make this work out of the box.

So you'll definitely need to patch the source files and replace
#include <hdf5/hdf5.h> with #include <hdf5.h>.

Then you also have the problem that your compile line doesn't include
the HDF5 directories. I haven't looked at your packaging, but in
general you need to have the following in CMake to link against HDF5:

find_package(HDF5 REQUIRED)
include_directories(${HDF5_INCLUDE_DIRS})
target_link_libraries(name_of_program_or_library ${HDF5_LIBRARIES})

The last line may be needed multiple times if there are multiple targets.

Regards,
Christian



Re: How to determine the filename for dlopen()

2017-11-13 Thread Christian Seiler

Hi,

On 2017-11-13 13:23, wf...@niif.hu wrote:

I'm packaging a program which wants to dlopen() some library.  It finds
this library via pkg-config (PKG_CHECK_MODULES).  How to best determine
the filename to use in the dlopen() call?  It should work cross-distro,
for cross-compilation and whatnot.  Is it always safe to use the SONAME
as the filename?


The SONAME is the right thing to do here, as that is what's encoded in
the DT_NEEDED field by the linker.


I'm currently considering something like

ld -shared -o dummy.so $(my_LIBS)
objdump -p dummy.so | fgrep NEEDED


That might work, but I'm not sure that's very stable.

I've created the following example code that works for me with libpng:

cmake_minimum_required(VERSION 3.0 FATAL_ERROR)
project(example)
enable_language(C)

find_library(PNG_LIBRARY_FILE NAMES png)
if(PNG_LIBRARY_FILE)
  execute_process(COMMAND objdump -p "${PNG_LIBRARY_FILE}"
                  OUTPUT_VARIABLE PNG_CONTENTS)
  if(PNG_CONTENTS)
    string(REGEX MATCH "\n[ \t]*SONAME[ \t]+([^ \t\r\n]*)"
           DUMMY "${PNG_CONTENTS}")
    if(DUMMY)
      set(PNG_SONAME "${CMAKE_MATCH_1}" CACHE STRING
          "The SONAME of the PNG library")
      message(STATUS "Got libpng soname: ${PNG_SONAME}")
    else()
      message(FATAL_ERROR "Could not extract SONAME from ${PNG_LIBRARY_FILE}")
    endif()
  else()
    message(FATAL_ERROR "Could not run objdump -p on ${PNG_LIBRARY_FILE}")
  endif()
else()
  message(FATAL_ERROR "Could not find -lpng")
endif()

Important:

 - This assumes that objdump -p actually works. This is basically only
   true if you use the GNU toolchain on ELF systems. If you have another
   platform then you need to call different things:

 - On Mac OS X you need to do otool -D $LIBRARY and then parse the
   output; it will give you back something like (notice the newline!)

   $filename:\n$library_id_abspath

   The base name of the second line of that output is what you're
   looking for for dlopen().

 - On other UNIX platforms I don't really know.

 - On Windows with the MinGW toolchain, when you have an import
   library (*.dll.a MinGW style or *.lib Microsoft style) then you
   may use dlltool -I $IMPORTLIB to extract the DLL name from
   that. However, MinGW does support linking directly against DLLs
   in some cases (when they were also compiled with MinGW, for
   example, and fulfill some additional criteria), and it may be
   the case that your linker finds the DLL directly if no import
   library is found, in which case dlltool -I will fail, but you
   can just use the basename of the DLL.

   Note that objdump -p does work on MinGW on Windows, but doesn't
   give you a SONAME. (It does mention the DLL name multiple times,
   but I'm not sure that's easy to parse.)

 - On Windows with MSVC I have no idea how to get the DLL name
   from an import library (*.lib), but there's definitely going
   to be a tool you'll be able to use.

 - No idea on yet other operating systems.

 - I hacked this together and am not sure it's the sanest way of
   parsing this in CMake... YMMV.

 - CMake might only find a static library depending on how your search
   path is set up (*.a on UNIX systems including Mac OS X, as well as
   on MinGW, but *.lib on Windows systems with MSVC). On Windows the
   fact that import libraries and static libraries share the same
   extension actually makes it quite difficult to handle this case
   properly.

 - If you do manage to write some relatively generic code, I would
   urge you to contribute that to CMake as a macro, so that other
   people could also profit from it.

Regards,
Christian



Re: C help needed for new version of tifffile

2017-10-05 Thread Christian Seiler
Hi Andreas,

On 10/05/2017 09:00 PM, Andreas Tille wrote:
> It seems that the definition of GET_NEXT_CODE is just wrong - but
> what would be correct?

So the code contains the following:

#define GET_NEXT_CODE \
code = *((uint32_t*)((void*)(encoded + (bitcount >> 3)))); \
if (little_endian) \
code = SWAP4BYTES(code); \
code <<= (uint32_t)(bitcount % 8); \
code &= mask; \
code >>= shr; \
bitcount += bitw; \
static PyObject*

Clearly the static PyObject* should not be part of the macro, but in fact it
is. The other compiler warnings you see are actually a result of that:

> tifffile.c:575:1: warning: return type defaults to 'int' [-Wimplicit-int]
>  py_decodelzw(PyObject* obj, PyObject* args)
>  ^~~~
> tifffile.c: In function 'py_decodelzw':
> tifffile.c:590:16: warning: return makes integer from pointer without a cast 
> [-Wint-conversion]
>  return NULL;
> ^~~~

The py_decodelzw function should actually return a PyObject* and not an int, but
the return type is absorbed into the macro.

And the reason why you had the problem is that the getorig.sh script
in the package downloads a pretty-printed version of the C file from
a website and the conversion procedure removes all empty lines. The
upstream source code does have the issue that the trailing \ is in
the last line of the macro - but since the next line is an empty line
anyway it doesn't actually cause any problems upstream.

This appears to be a problem in links (which you use to dump the plain
text): when a line within <pre> starts with a tag (e.g.
<span>, as in the highlighter used here), it strips out any
empty lines:

$ links -dump /dev/stdin <<EOF
<html>
<head>
<title>Text dump test</title>
</head>
<body>
<pre>
<span>Hello</span>
<span>World</span>

<span>A</span>
<span>B</span>
<span>C</span>
</pre>
</body>
</html>
EOF

 Hello
 World
 A
 B
 C

Browsers, on the other hand, keep them. You should probably report
that to the links people.

You can get the original C file as text/plain by removing the ".html"
from the URL, but that doesn't work for the Python file, so I'm not
sure that that is intentional by the author. You should really ask
the upstream author to provide a proper download URI for the original
source files so that you don't have to do all these weird things.
Maybe suggest that they make those available if you add ".txt"
instead of ".html".

In the mean time: lynx -dump doesn't appear to suffer from this
issue, and it also doesn't add whitespace to the beginning of every
line, so you could easily just use lynx -dump instead of links -dump
in getorig.sh for now. (And drop the 'sed' there.) But again: I don't
think that converting syntax-highlighted HTML of the original source
back to the original source is the best of ideas, it would be much
better if the original source were directly available in text form.

Hope that helps...

Regards,
Christian



Re: fseeko() on reference file: Invalid argument (Was: Bug#876840: staden-io-lib FTBFS on non-i386 32bit: FAIL: java)

2017-09-26 Thread Christian Seiler
Hi Andreas,

On 09/26/2017 10:08 PM, Andreas Tille wrote:
> I need to admit I have no idea why
> 
>fseeko() on reference file: Invalid argument
> 
> is happening on some architectures.

According to the manpage of fseek(), which is identical to fseeko()
apart from the offset data type:

ERRORS
   [...]
   EINVAL The whence argument to fseek() was not SEEK_SET,
   SEEK_END, or SEEK_CUR.  Or: the resulting file offset would
   be negative.

I suspect that something is calling fseeko() with a negative offset.

I'd recommend doing an strace on the specific test binary that
fails on a porterbox (e.g. armhf) + on amd64 for comparison and
then look for the offending fseeko() call. That might help isolate
the issue.
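
For reference, a minimal standalone program (not taken from
staden-io-lib) that reproduces exactly this error string on a glibc
system:

#include <cstdio>
#include <cerrno>
#include <cstring>

int main()
{
    std::FILE *fp = std::fopen("/etc/hostname", "r");
    if (!fp)
        return 1;
    /* The resulting offset would be negative, so fseeko() fails
       and sets errno to EINVAL. */
    if (fseeko(fp, (off_t)-1, SEEK_SET) != 0)
        std::printf("fseeko() on reference file: %s\n", std::strerror(errno));
    std::fclose(fp);
    return 0;
}

This prints "fseeko() on reference file: Invalid argument", which is
consistent with a negative offset being computed somewhere on the
failing architectures.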

Regards,
Christian



Re: Bug#876839: staden-io-lib FTBFS on big endian: error: invalid operands to binary

2017-09-26 Thread Christian Seiler
On 09/26/2017 10:06 PM, Andreas Tille wrote:
>> ...
>> In file included from bgzip.c:56:0:
>> bgzip.c: In function 'gzi_index_dump':
>> ../io_lib/os.h:127:10: error: invalid operands to binary & (have 'uint64_t * 
>> {aka long long unsigned int *}' and 'long long int')
>>  (((x & 0x00ffLL) << 56) + \
>>   ^
>> ../io_lib/os.h:185:20: note: in expansion of macro 'iswap_int8'
>>  #define le_int8(x) iswap_int8((x))
>> ^~
>> bgzip.c:190:16: note: in expansion of macro 'le_int8'
>>  if (fwrite(le_int8(&n), sizeof(n), 1, idx_f) != 1)
>> ^~~

This code is completely wrong.

le_int8 appears to do a 64 bit byte order swap to adjust the
endianness of a quantity. What bgzip.c does at this point is the
following (removed if() for clarity):

uint64_t n = idx->n;
fwrite(le_int8(&n), sizeof(n), 1, idx_f);

&n is the pointer to the 'n' variable, but you really don't want
to byte swap a pointer to a local variable before passing it to
a function that reads that pointer (also note that the pointer
could be 32 bit on 32 bit systems!).

What you presumably want to do is byte swap the _contents_ of the
pointer (especially since n is a 64 bit integer). If you look at
the read code in the same file this is actually what happens:

if (8 != fread(&n, 1, 8, fp))
goto err;
n = le_int8(n);

So what you'd rather want to have is the following code:

uint64_t n = le_int8(idx->n);
fwrite(&n, sizeof(n), 1, idx_f);

Or, if I adjust the entirety of that section in the original write
code:

int i;
uint64_t n = le_int8(idx->n);
if (fwrite(&n, sizeof(n), 1, idx_f) != 1)
    goto fail;
for (i = 0; i < idx->n; i++) {
    uint64_t nn;
    nn = le_int8(idx->c_off[i]);
    if (fwrite(&nn, sizeof(nn), 1, idx_f) != 1)
        goto fail;
    nn = le_int8(idx->u_off[i]);
    if (fwrite(&nn, sizeof(nn), 1, idx_f) != 1)
        goto fail;
}

That should fix the compiler error you're seeing.

The only reason that doesn't fail on Little Endian is because the
le_int8(x) function is a no-op on those systems and just passes
through the original pointer.

Regards,
Christian



Re: Is there any sensible way to know what qt5 devel packages might be needed as build-depends

2017-09-26 Thread Christian Seiler

Hi Andreas,

On 2017-09-26 17:41, Andreas Tille wrote:

I try to port clonalframe[1] to Qt5 and I somehow wild-guessed what
Build-Depends might be needed.  Anyway I got

  qmake: could not find a Qt installation of ''

I've got this totally unhelpful message in another package - what
is a sensible approach to find the needed packages?


Since the project is using qmake as its build system, you could
try:

find . \( -name "*.pro" -o -name "*.pri" \) -print0 | \
   xargs -0 grep -A 3 -E "QT.*="

to find all the lines of the type

QT += core gui widgets

and the such. Then you could go through those and see if there is
a "libqt5XXX5-dev" package (replace XXX with the module) available.
Some things are in qtbase5-dev directly (such as the 3 examples I
have above) though.

It could also be that some modules of Qt aren't packaged yet at
all. For example - and I may be mistaken about that - I don't
believe there's a libqt5datavisualization5-dev for that module in
Debian yet.

Regards,
Christian



Re: Nanopolish: gcc-7 issue solved, but immintrin.h missing on most architectures

2017-09-18 Thread Christian Seiler
Hi Andreas,

On 09/18/2017 01:54 PM, Andreas Tille wrote:
> Strangely enough on i386 the build fails with
> 
>/usr/bin/ld: cannot find -lhdf5
> 
> which I do not understand as well ...

You add the following to the linker flags:

-L/usr/lib/$(shell dpkg-architecture -qDEB_TARGET_GNU_TYPE)/hdf5/serial

This is wrong on i386: DEB_TARGET_GNU_TYPE expands to i686-linux-gnu,
while Debian uses i386-linux-gnu. Also, DEB_TARGET_* is definitely
wrong unless you are _building_ a cross-compiler. What you want here is
DEB_HOST_MULTIARCH - that will be correct even if you are _using_ a
cross compiler.

Also, if the package requires intrinsics, you should depend on
sse-support on i386 (but not on amd64, where SSE1 is always part of
the base ISA).

Regards,
Christian



Re: Help needed with gcc-7 error

2017-08-28 Thread Christian Seiler
On 08/28/2017 10:54 PM, Martin Eberhard Schauer wrote:
>>>  Well, casting to long helped - but in how far does making
> 
>>>  abs(unsigned - unsigned)
> 
>>>  no sense?  This does not sound very logical to me.
> 
>> The result of (unsigned - unsigned) is unsigned.
> What about A, B both unsigned and B > A?

This will wrap around. Current C/C++ standards define that unsigned
types with N bits always operate mod 2^N. Whereas on the other hand
any overflow / underflow with signed types is undefined behavior.

Note however that this rule only holds true for int or larger. For
smaller types (unsigned short, unsigned char) these are promoted to
int (because it fits in int) when you subtract them.

For example:

#include <cstdlib>
#include <iostream>

int main()
{
  unsigned char a = 23, b = 42;
  std::cout << std::abs(a - b) << std::endl;
  return 0;
}

would compile and print out 19 when run, because a - b is promoted
to int and std::abs(int) does exist.

The problem is that std::abs(unsigned) doesn't exist, and unsigned can
be converted to either int (narrowing conversion) or long (widening
conversion); since both conversions are legal, the compiler doesn't
know which overload to take.
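
For illustration, one unambiguous fix along the lines of the "casting
to long helped" remark quoted at the top:

#include <cstdlib>

long diff(unsigned a, unsigned b)
{
    /* Casting first makes the subtraction signed, so std::abs(long)
       is selected; this is fine as long as the values fit into long. */
    return std::abs((long)a - (long)b);
}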

Regards,
Christian

PS: In order to illustrate how integer promotion works in practice,
you can try the following program:

#include <iostream>
#include <iomanip>
#include <typeinfo>
#include <cstddef>

template<typename IntType>
void printType(char const* typeName)
{
  std::cout << std::setw(18) << typeName << ": typeid().name() == \""
            << typeid(IntType).name() << "\", sizeof() == "
            << sizeof(IntType) << "\n";
}

#define PRINT_TYPE(typeName) printType<typeName>(#typeName)

template<typename IntTypeA, typename IntTypeB>
void doTest(char const* typeNameA, char const* typeNameB)
{
  IntTypeA a = 1;
  IntTypeB b = 2;
  std::cout << std::setw(18) << typeNameA << " - " << std::setw(18)
            << typeNameB << ": typeid(a - b).name() = \""
            << typeid(a - b).name() << "\", a - b = " << (a - b) << '\n';
}

#define DO_TEST(typeNameA, typeNameB) doTest<typeNameA, typeNameB>(#typeNameA, #typeNameB)

#define TEST_HELPER(typeNameA) \
DO_TEST(typeNameA, char); \
DO_TEST(typeNameA, signed char); \
DO_TEST(typeNameA, unsigned char); \
DO_TEST(typeNameA, signed short); \
DO_TEST(typeNameA, unsigned short); \
DO_TEST(typeNameA, signed int); \
DO_TEST(typeNameA, unsigned int); \
DO_TEST(typeNameA, signed long); \
DO_TEST(typeNameA, unsigned long); \
DO_TEST(typeNameA, signed long long); \
DO_TEST(typeNameA, unsigned long long);

int main()
{
PRINT_TYPE(char);
PRINT_TYPE(signed char);
PRINT_TYPE(unsigned char);
PRINT_TYPE(signed short);
PRINT_TYPE(unsigned short);
PRINT_TYPE(signed int);
PRINT_TYPE(unsigned int);
PRINT_TYPE(signed long);
PRINT_TYPE(unsigned long);
PRINT_TYPE(signed long long);
PRINT_TYPE(unsigned long long);

TEST_HELPER(char);
TEST_HELPER(signed char);
TEST_HELPER(unsigned char);
TEST_HELPER(signed short);
TEST_HELPER(unsigned short);
TEST_HELPER(signed int);
TEST_HELPER(unsigned int);
TEST_HELPER(signed long);
TEST_HELPER(unsigned long);
TEST_HELPER(signed long long);
TEST_HELPER(unsigned long long);


return 0;
}



Re: C++ help needed (Was: Bug#853375: disulfinder: ftbfs with GCC-7)

2017-08-26 Thread Christian Seiler
Hi Andreas,

On 08/26/2017 10:08 PM, Andreas Tille wrote:
> I moved disulfinder to Git[1] and tried to track down this issue with my
> limited C++ knowledge but failed.  The issue is
> 
> ...
> make[3]: Entering directory '/build/disulfinder-1.2.11/disulfind/src'
> g++ -Wdate-time -D_FORTIFY_SOURCE=2 -g -O2 
> -fdebug-prefix-map=/build/disulfinder-1.2.11=. -fstack-protector-strong 
> -Wformat -Werror=format-security -DDEFAULT_PKGDATADIR=\"/usr/share/disulfind
> In file included from Input/utils.h:1:0,
>  from Input/GlobalDescriptor.cpp:3:
> Input/../Common/Matrix.h: In constructor 'Matrix::Matrix(int, int, 
> DATATYPE*)':
> Input/../Common/Matrix.h:208:3: error: 'Exception' has not been declared
>Exception::Assert(nrows>0 && ncols>0,"construction of empty matrix");
>^
> Input/../Common/Matrix.h: In constructor 'Matrix::Matrix(int, int, 
> const DATATYPE&)':
> Input/../Common/Matrix.h:221:3: error: 'Exception' has not been declared
>Exception::Assert(nrows>0 && ncols>0,"construction of empty matrix");
>^
> Input/../Common/Matrix.h: In constructor 'Matrix::Matrix(int, int, 
> DATATYPE**)':
> Input/../Common/Matrix.h:232:3: error: 'Exception' has not been declared
>Exception::Assert(nrows>0 && ncols>0, "construction of empty matrix");
>^
> Input/../Common/Matrix.h: In member function 'void 
> Matrix::Resize(int, int)':
> Input/../Common/Matrix.h:288:3: error: 'Exception' has not been declared
>Exception::Assert(nrows>0 && ncols>0, "construction of empty matrix");
>^
> ...
> 
> 
> As far as I can see Exception is declared in
>disulfind/src/Common/Exception.h
> which is also included in Matrix.h.
> 
> Any hint what might be wrong here?

Common/Exception.h uses __EXCEPTION_H as its include guard macro name (so
that it's included only once) - unfortunately <exception> from
GCC 7 uses the same guard macro name. Hence Common/Exception.h is not
actually processed in your case.

The bug is in your package, see
https://stackoverflow.com/a/228797
for a summary of the rules regarding identifiers in C++; the guard macro
name clearly intrudes on the namespace reserved for the compiler.

Change the guard macro name to something else, e.g.

#ifndef DISULFIND_COMMON_EXCEPTION_H
#define DISULFIND_COMMON_EXCEPTION_H

and then it should work again with GCC 7.

Regards,
Christian



Re: How to replace throw exceptions in C++11

2017-08-25 Thread Christian Seiler

Hi,

On 2017-08-25 10:10, Andreas Tille wrote:

I try to fix #872262 and while I've read that dynamic exception
specifications are deprecated in C++11 I have not found a sensible
replacement for the affected code.

Any hint what to do here?


The main issue with the code is not the exception specifications but
rather that you build with -Werror here. -Werror is great for
developers (to make sure no warnings get ignored when the software is
written), but it's an awful thing to have in distribution packages.
Switch to a new compiler (as has happened here) that suddenly has
additional warnings, and builds just start to fail needlessly.

The patch in the bug report already disables -Werror - and you should
definitely keep building the package that way even after those
warnings have gone.

(What is possible is to build a package with specific warnings made to
errors, for example -Werror=format-security, to catch really bad bugs.
But that should be opt-in, not just a generic -Werror "please turn ALL
warnings into errors".)

As for the replacement for affected code (for the long term): C++11 has
noexcept as a replacement for throw() with empty parameter list - so if
you are _really_, _really_, _really_ sure that a function doesn't ever
throw you can specify noexcept after the function name. (If it does
throw anyway, the program will exit.) Please note though that - as with
const - noexcept is part of the function name mangling, so void foo();
and void foo() noexcept; are two separate functions when it comes to
their symbol name - so if you want to keep binary compatibility, adding
noexcept in cases where there are empty throw() specifications might
not be an option. As for exception specifications with explicitly named
exception names, for example throw(std::runtime_error) after a function
name: there's no replacement for that in C++11, just remove those
entirely if you want to be conforming. (They were useless in previous
C++ versions anyway, to my knowledge no compiler ever did anything with
those specifications.)
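
A minimal before/after sketch (the function names are hypothetical):

#include <stdexcept>

/* C++98:
   void cleanup() throw();                          // empty specification
   void parse(const char *s) throw(std::runtime_error);
*/

/* C++11: */
void cleanup() noexcept { }   // only if it really never throws
void parse(const char *s)     // named specifications are simply dropped
{
    if (!s)
        throw std::runtime_error("no input");
}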

Regards,
Christian



Re: Linitian orig-tarball-missing-upstream-signature

2017-07-31 Thread Christian Seiler
Hi,

On 07/31/2017 11:34 AM, Andrey Rahmatullin wrote:
> On Mon, Jul 31, 2017 at 05:30:46AM -0400, Paul Wise wrote:
>>> How does this interact with git-based workflows?
>>
>> I don't use such workflows so I'm not sure, but at a guess; uscan and
>> upstream tarballs aren't involved in your workflow, so you won't have
>> upstream tarball signatures either and should manually verify the
>> signatures on git tags (and commits) instead, which I don't think
>> uscan can do yet, but I guess adding it to uscan would be feasible but
>> then the signatures would not be stored alongside the generated
>> tarball.
> uscan isn't used, or needed, in the git-only workflow at all.

In purely git workflows (that pull remote git tags), sure, but
then you'd not have debian/watch, and you wouldn't have lintian
complaining about a missing .asc file in the .changes file
because upstream doesn't actually sign any tarballs.

But in workflows that do work with upstream tarballs and use git
for Debian packaging, uscan is still useful. What I tend to do:

uscan
gbp import-orig ../package_newversion.orig.tar.gz

(gbp import-orig potentially with --upstream-vcs-tag= to keep
upstream's git history.)

And there I do want uscan to actually check the signature of
the new orig tarball it downloads. But that also means that as
I'm using the orig tarball from upstream (and pristine-tar is
just a weird way of storing it) I think it is semantically
correct to include the .asc files in the .changes file.

Regards,
Christian



Re: Linitian orig-tarball-missing-upstream-signature

2017-07-31 Thread Christian Seiler
Hi,

On 07/31/2017 10:54 AM, Paul Wise wrote:
> On Mon, Jul 31, 2017 at 4:24 AM, Ole Streicher wrote:
> 
>> is not really helpful to me; at least I did not find a mention in the
>> Debian policy that the signature should be included in the .changes
>> file. Also, it seems that the standard (pdebuild) toolchain does not
>> include it by default.
> 
> Policy documents current practice rather than describing what
> practices should be taken, so I think that we will only get this in
> policy once it is more common.
> 
> The standard toolchain here is uscan, not pdebuild, and there is a bug
> asking placing the signatures in the correct place open already, it
> just needs someone to do the work:

How does this interact with git-based workflows? Currently I use
pristine-tar (in combination with gbp) for all of the packages I
maintain. [1]

I haven't tried any recent versions, but as far as I remember
gbp doesn't store / restore .asc files the same way it does with
orig tarballs via pristine-tar. So doing something like

gbp buildpackage --git-export-dir=../some-other-dir ...

will not result in the .asc being included, even if it's present
in the parent directory of the git checkout I'm working on.

Regards,
Christian

[1] Granted, I don't really have packages that are upstream
signed, except for one where I'm also upstream myself, so I'm
asking more out of principle than practical relevance for me.



Bug#868378: RFS: nlohmann-json/2.1.1-1

2017-07-16 Thread Christian Seiler
Hi Muri,

On 07/16/2017 10:05 PM, Muri Nicanor wrote:
> On 07/16/2017 08:47 AM, Christian Seiler wrote:
>> This will likely break builds of reverse dependencies because they
>> might not find the header anymore. Did you test all of the reverse
>> dependencies of nlohmann-json in the archive that they'll find the
>> header in the new location? If some of them don't, you should file
>> bugs against those packages (ideally with patches) that the
>> maintainers know about this change. [1]
> There are two reverse dependencies atm, usbguard and mkvtoolnix. I've
> removed the build-dependency from usbguard, because it was not needed
> anymore and i've filed bug #868573 against mkvtoolnix and attached a patch.
> 
> Not sure if that qualifies as a library transition if its only one
> package, especially as mkvtoolnix doesn't FTBFS with the new
> nlohmann-json package (because it ships its own copy os the json.hpp
> file which it falls back to if the system one is not found...)

Since mkvtoolnix doesn't immediately FTBFS with the new library
package of yours and it really is just the one package, then yes,
I would agree that you probably don't need a transition slot for
just that.

Thanks for being so quick about filing the bug against the rdep!

Apart from that: even though I can't sponsor, I did a quick
review of the packaging (I did NOT check the upstream changes to
the version already in the archive); the package looks to be in
a very good shape in general. Some minor nitpicks that you might
want to fix (although those are really minor, they can wait for
a newer version):

 - (found by check-all-the-things [1]) you use "MIT" as the
   abbreviation, instead of the recommended "Expat" for that
   license

 - the package is still on debhelper compat 9 (10 is current,
   but see the debhelper docs for information on what defaults
   changed between those versions before changing d/compat)

Otherwise: it builds fine in sbuild, is lintian clean, hardening
is enabled, the packaging looks sane.

(While looking at the build log I did stumble upon another
issue, but that's in debhelper's CMake integration not keeping
up with semi-recent CMake features. I've reported that issue
in #868584, in case you're interested.)

Anyway, hope my review helps you in getting the package
sponsored sooner.

Regards,
Christian

[1] This takes a _long_ time with this package as you have huge
test data in JSON form within the package, and if you do
run it, redirect its output into a file, otherwise your
terminal will be swamped with messages.
(Also note that check-all-the-things has some false
positives in your case, e.g. it checks for correct JSON as
one of its checks, and your package has some intentionally
broken JSON as unit tests.)



Bug#868378: RFS: nlohmann-json/2.1.1-1

2017-07-16 Thread Christian Seiler
Hi there,

(not a DD, can't sponsor, but a quick comment:)

On 07/15/2017 12:05 PM, Muri Nicanor wrote:
>   * Switched build system to cmake, library is now installed in
> /usr/include/nlohmann, which is upstream default (Closes: #868112)

This will likely break builds of reverse dependencies because they
might not find the header anymore. Did you test all of the reverse
dependencies of nlohmann-json in the archive that they'll find the
header in the new location? If some of them don't, you should file
bugs against those packages (ideally with patches) that the
maintainers know about this change. [1]

Also, if the current packages can't auto-detect the new location
(i.e. they start to FTBFS with your new package), then this is
technically a library transition, so you should follow the
guidelines for those:
https://wiki.debian.org/Teams/ReleaseTeam/Transitions

Regards,
Christian

[1] List of reverse depends (since this is a header-only library):
grep-dctrl -s Package -F Build-Depends,Build-Depends-Indep \
nlohmann-json-dev /var/lib/apt/lists/*Sources
(You need sid in your sources.list and a recent apt-get update
to ensure this is up to date.)



Bug#863309: curvedns RFS

2017-06-17 Thread Christian Seiler
Hi,

On 17 June 2017 12:51:17 CEST, Gianfranco Costamagna wrote:
>I think it might be worth to ask on debian-mentors mail list why PIE
>flag is not
>injected anymore by debhelper...


It's not injected anymore because -fPIE is the default for GCC from Stretch
onwards. This is a false positive from BLHC. See also:

https://bugs.debian.org/845339

Regards,
Christian



Re: build-depends on a _source_ package ?

2017-06-13 Thread Christian Seiler
Hi,

On 06/12/2017 11:05 PM, Benoît Rouits wrote:
> Is there a solution ? Should i file a bug on WNPP to ask for a
> qtcreator-dev package in order to have qtcreator source installed in
> /usr/src ?

Do you need the entire source of Qt Creator or just some header files?

In either case, you can only build-depend on binary packages, so that
you will need to ask the Qt Creator people to provide an additional
package for you.

If you only need header files, you should ask them to provide a
-dev package with the headers (and potentially .so libraries). If you
need the entire source tree, then you should ask them to provide a
-source package (see e.g. gcc-6-source, glibc-source, linux-source for
examples of this already in the archive) that puts the entire source
tree into /usr/src.

Note that you shouldn't open a bug on WNPP, but rather a bug on the
qtcreator package itself, severity wishlist, requesting this.

Hope that helps.

Regards,
Christian



Re: Bug#863108: RFS: minecraft-installer/0.1-1 [ITP] -- Unofficial way to easily install game

2017-05-22 Thread Christian Seiler
On 05/22/2017 02:14 PM, Carlos Donizete Froes wrote:
> This package contains "contrib/games" in 'd/control'.

Hmmm, then mentors doesn't show that, because it just says
"Section: games" on that page. Well, I just noticed it does show it,
but only in the URL to the dsc file that I overlooked when I saw
this message.

Sorry for the noise. :-(

Regards,
Christian



Bug#863108: RFS: minecraft-installer/0.1-1 [ITP] -- Unofficial way to easily install game

2017-05-22 Thread Christian Seiler
Hi,

Can't sponsor myself and didn't look at it in detail, but a quick comment:


On 21 May 2017 22:49:54 CEST, Carlos Donizete Froes wrote:
>  https://mentors.debian.net/package/minecraft-installer

The package itself is free software (I presume), but it is for downloading a 
non-free game. For this reason it should be in contrib, not main. You should 
hence change the section from 'games' to 'contrib/games'.

Regards,
Christian



Re: Bug#863108: RFS: minecraft-installer/0.1-1 [ITP] -- Unofficial way to easily install game

2017-05-22 Thread Christian Seiler
Hi,

(Resending, got the address for debian-mentors wrong. Sorry for the noise.)

Can't sponsor myself and didn't look at it in detail, but a quick comment:


On 21 May 2017 22:49:54 CEST, Carlos Donizete Froes wrote:
>  https://mentors.debian.net/package/minecraft-installer

The package itself is free software (I presume), but it is for downloading a 
non-free game. For this reason it should be in contrib, not main. You should 
hence change the section from 'games' to 'contrib/games'.

Regards,
Christian


Re: Bug#861754: libpll: FTBFS on non-x86: x86intrin.h: No such file or directory

2017-05-18 Thread Christian Seiler
Hi,

a small comment on the patch:

On 05/16/2017 01:28 PM, James Cowgill wrote:
>  override_dh_auto_configure:
> - ./autogen.sh
> -ifeq ($(DEB_BUILD_ARCH),i386)
> - ./autogen.sh --disable-avx --disable-sse
> - dh_auto_configure -- --disable-avx --disable-sse
> +ifneq ($(filter $(DEB_HOST_ARCH_CPU), amd64 i386),)
> + dh_auto_configure
>  else
> - ./autogen.sh --disable-avx
> - dh_auto_configure -- --disable-avx
> + dh_auto_configure -- --disable-sse --disable-avx --disable-avx2
>  endif

At first glance this appears to be wrong, as SSE2 is part of the
amd64 base ISA. However, --disable-sse actually disables SSE3
(not part of amd64 base ISA), so it's not actually wrong - you'd
probably want to add a comment to d/rules that indicates that
--disable-sse is for SSE3 though.

Also, you should add x32 to the list of archs next to amd64 and
i386 where SSE3 and higher should be disabled.

Regards,
Christian



Re: how best to package when using hardware vectorization with vector-unit specific code?

2017-05-11 Thread Christian Seiler
On 05/11/2017 09:33 AM, Kay F. Jahnke wrote:
> Or is there possibly even a ready-made solution
> just for the purpose?

Well, even if FMV doesn't work for you in your code due to the way it
is organized, you could definitely use it for dispatching the
executables.

To elaborate on that:

1) Install the actual binaries under

   /usr/lib/packagename/executable.$VARIANT

   (As others said elsewhere: don't create _all_ possible variants
   that your code supports, just create those that make the most
   sense. On amd64 that would probably be sse2 (part of base ISA),
   sse4.2 and avx2.)

2) Use the following program (not tested, just as an idea) to dispatch
   to the actual programs you want to use:

__attribute__ ((target ("default")))
void run(char **argv)
{
#if defined(__amd64__)
execv("/usr/lib/packagename/executable.sse2", argv);
perror("Could not execute /usr/lib/packagename/executable.sse2");
#elif some other architecture with vector by default
execv("/usr/lib/packagename/executable.some_other_vector_isa", argv);
perror("Could not execute 
/usr/lib/packagename/executable.some_other_vector_isa");
#else
execv("/usr/lib/packagename/executable.nonvectorized", argv);
perror("Could not execute /usr/lib/packagename/executable.nonvectorized");
#endif
}

#if defined(__i386__)
__attribute__ ((target ("sse2")))
void run(char **argv)
{
execv("/usr/lib/packagename/executable.sse2", argv);
perror("Could not execute /usr/lib/packagename/executable.sse2");
}
#endif

#if defined(__amd64__)
__attribute__ ((target ("avx2")))
void run(char **argv)
{
execv("/usr/lib/packagename/executable.avx2", argv);
perror("Could not execute /usr/lib/packagename/executable.avx2");
}
#endif

#if defined(__arm__)
__attribute__ ((target ("fpu=neon-vfpv3")))
void run(char **argv)
{
execv("/usr/lib/packagename/executable.neon", argv);
perror("Could not execute /usr/lib/packagename/executable.neon");
}
#endif

int main(int, char **argv)
{
run(argv);
return 1;
}

This way you don't have to care about how to check for CPU flags,
the compiler will do it for you - and I believe the above structure
(or something very similar) is quite maintainable for the future.

(Also note that GCC 4.8 already supports this kind of FMV, the
GCC 6 addition was target_clones).

Regards,
Christian



Re: how best to package when using hardware vectorization with vector-unit specific code?

2017-05-10 Thread Christian Seiler
On 05/10/2017 11:52 AM, Wookey wrote:
> Debian requires packages to run on the base level ISA defined for each
> architecture (which does change slowly over time).

Well, kind of. What Debian requires is that if it is at all feasible
software should run on the base ISA - which in practice means that
very often the software is only compiled for the base ISA itself,
resulting in the binaries being slower than they need to be on more
modern hardware.

However, there are a couple of packages that can't easily be ported
to the base ISA (such as packages that use tons of SSE assembly,
which allows those packages to be run on some 32bit x86 CPUs), an
in this case the consensus was that it's better to have the packages
in Debian at all, even if they aren't available for all users. That
said, in those cases the basic constraints were that suitable
run-time checks need to be available to produce an error that the
user can understand (and not just fail with SIGILL), and to still
consider the fact that it doesn't run on the base ISA as a bug in the
package, just not a RC bug. (The rationale why this is not considered
RC is the following: take for example the case of SSE-assembly-heavy
code, which can't easily be made to work with x87 FPU-only systems.
If missing base ISA support in that case were to be RC, then the only
alternative would be to just drop i386 support completely, which is
a worse outcome because then nobody on i386 could use the package.)

With that all out of the way: if a package does support being
compiled for the base ISA, or a patch to make it work is trivial,
then it would be considered RC (on release archs at least) not
supporting the base ISA in the compiled package. What's not required
is to also compile optimized versions that run faster on newer
hardware - but in an ideal world one would also like to do that.
(With gcc's function multi-versioning (FMV) this has become a lot
easier nowadays though.)

Regards,
Christian



Re: Static linking question - Bug #859130 ITP: lina -- iso-compliant Forth interpreter and compiler

2017-05-02 Thread Christian Seiler
On 05/03/2017 12:41 AM, Albert van der Horst wrote:
> This message is slightly misleading. In fact the binary is not linked at all,
> nor does it need any linking.
>
> This is the build command for lina
> 
> fasm lina.fas -m256000

Well, I would argue that a compiler copying everything together
into a huge assembly file and then using an assembler to create
a binary is semantically not really different from having the
linker copy object code together when statically linking. So I
would in fact argue that you're indeed statically linking here.

(And any ELF binary that doesn't import any shared object is
considered statically linked according to the ELF standard,
even if you just compiled an assembly file.)

> It seems appropriate to add an override but I've no clue how to do that.

If the compiler for Forth works that way, then yes, you should
add an override. IIRC the Go language also uses static linking
only, so there's precedent for programming languages that only
support static linking.

To add an override: if you're using debhelper for packaging,
just add a file debian/lina.lintian-overrides with the following
contents:

# Comment explaining the situation
lina: statically-linked-binary

(If you're using the automatic dh(1), then you're set; if you
are manually invoking dh_*, then you need to make sure that
dh_lintian is present in the right place in debian/rules. If
you're not using debhelper, just make sure that that file is
installed into /usr/share/lintian/overrides/lina in the binary
package.)

> There are more requirements in the policy that fail on such simple programs:
> Supposedly programs must be made simpler by using strip, however an attempt
> to make lina simpler make its brains fall out:
> 
> ~/PROJECT/ciforth$ strip lina
> ~/PROJECT/ciforth$ lina
> Segmentation fault
> 
> (I don't think strip should behave like this on a valid elf-executable.). 

If strip removes things that make the program fail, then I
believe your binary is broken in some way and your ELF file
is not completely "valid". It might be good enough so that it
runs (for statically linked executables the kernel just loads
the binary and calls the entry point, so the rest of the ELF
format doesn't really matter at all, you could probably get
away with a _lot_ of invalid things in the file and it would
still run), but there's something in there that doesn't follow
the specification, or the code makes certain assumptions that
just aren't true in general.

For now it's probably fine to skip the 'strip' step (I guess
that you probably won't have debug symbols in there anyway?),
but I do believe that this really is a bug in your package, not
a bug in binutils. The severity of the bug is probably only
'normal' though.

Regards,
Christian



Re: C++ help needed for psortb

2017-04-18 Thread Christian Seiler
On 04/18/2017 11:01 PM, Andreas Tille wrote:
> x86_64-linux-gnu-gcc -g -O2 -fdebug-prefix-map=/build/psortb-3.0.4+dfsg=. 
> -fstack-protector-strong -Wformat -Werror=format-security -Wl,-z,relro  
> -shared -L/usr/local/lib -fstack-protector-strong HMM.o hmm-binding.o  -o 
> ../blib/arch/auto/Algorithm/HMM/HMM.so  \
>-lm -lpthread -lstdc++ -L/usr/local/lib -lhmmer -lsquid  \
> 
> /usr/bin/ld: 
> /usr/lib/gcc/x86_64-linux-gnu/6/../../../x86_64-linux-gnu/libhmmer.a(alphabet.o):
>  relocation R_X86_64_PC32 against symbol `Alphabet' can not be used when 
> making a shared   object; recompile with -fPIC
> /usr/bin/ld: final link failed: Bad value
> collect2: error: ld returned 1 exit status
> ...
> 
> which is probably due to the fact that I did not change hmmer2 to
> create a shared rather than a static library and lhmmer is not compiled
> with -fPIC.  What might be the less stressful way to solve this?  I
> think the optimal solution would be to craft configure.ac and
> Makefile.am for hmmer2 (which only ships configure and Makefile.in) and
> by doing so create a shared library.  However, I do not consider this
> as a very fruitful way to spent someones time on orphaned software so
> a cheaper solution would be welcome.

Well, you could compile the static library with -fPIC anyway. Linking
a static library into a shared library is not a problem in and by
itself (the code will be copied into the shared library just like it
would be copied into an executable), the only problem here is the
missing -fPIC.

So if you shoe-horn -fPIC into the compiler flags of the static
library, linking that into a dynamic library later should work.

(That said: I'm not a huge fan of this approach, Debian prefers to
use shared libraries for a reason. OTOH, if I understand you
correctly your second pacakge is the only reverse dependency, so
it's not that big of a deal in this case.)

Regards,
Christian



Re: C++ help needed for psortb

2017-04-18 Thread Christian Seiler
Hi Andreas,

On 04/18/2017 10:15 PM, Andreas Tille wrote:
> The definition of the structure threshold_s can be found in
> /usr/include/hmmer2/structs.h (of package libhmmer2-dev) and
> looks like
> 
> struct threshold_s {
>   float  globT; /* T parameter: keep only hits > globT bits */
>   double globE; /* E parameter: keep hits < globE E-value   */
>   float  domT;  /* T parameter for individual domains   */
>   double domE;  /* E parameter for individual domains   */
> /* autosetting of cutoffs using Pfam annot: */
>   enum { CUT_NONE, CUT_GA, CUT_NC, CUT_TC } autocut;
>   int   Z;  /* nseq to base E value calculation on  */
> };

Congratulations: you've stumbled upon a corner-case that
demonstrates that C is _not_ a subset of C++. In this case,
the above structure definition does very different things in
C and C++ when it comes to defined names.

C: there are no nested structure names, the entire namespace
is flat. Any enum defined within a structure defines names
that are available in the global namespace. If you include
the above structure from a C file and compile it with a C
compiler, the name CUT_NONE will be defined directly.

C++: nested structures are a language feature, so any enum
defined within a struct (or class) will only create names
that are nested within that struct (or class). In this case,
there will be no global name CUT_NONE, but instead a name
threshold_s::CUT_NONE will exist. Unfortunately, declaring
the struct within an extern "C" { } block doesn't help
either, namespacing occurs regardless.




So: in a .c file the code you're trying to compile should
work, but you're trying to compile a .cpp file, so no dice
here. To fix the issue, just use threshold_s::CUT_NONE
instead of just CUT_NONE within C++ code.
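
A minimal illustration - compile it once as C and once as C++:

struct threshold_s {
    enum { CUT_NONE, CUT_GA, CUT_NC, CUT_TC } autocut;
};

int main(void)
{
#ifdef __cplusplus
    int a = threshold_s::CUT_NONE;  /* OK in C++: enumerator is nested   */
    /* int b = CUT_NONE; */         /* error in C++: no global CUT_NONE  */
#else
    int a = CUT_NONE;               /* OK in C: names land in file scope */
#endif
    return a;
}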

What's beyond me is how the code you're trying to compile
came to be in its current form, as even the author would
have to have stumbled over the same problem when trying to
compile it.

Regards,
Christian



Bug#858860: RFS: arpwatch [ITA]

2017-04-06 Thread Christian Seiler
On 04/06/2017 02:51 PM, Lukas Schwaighofer wrote:
> Hi Christian,
> 
> On Thu, 6 Apr 2017 14:30:24 +0200
> Christian Seiler <christ...@iwakd.de> wrote:
>> The problem is that dirs is only interpreted by dh_installdirs, which
>> is typically run after dh_auto_install, so that wouldn't actually
>> solve your problem.
> 
> It does solve the problem (i.e. the error is gone if `usr/sbin` is
> present in the `dirs` file).  According to the Debian New Maintainers'
> Guide guide, creating directories that are not created by
> `make install DESTDIR=...` as invoked by dh_auto_install is exactly
> what the dirs file is for [1].
> 
> Also, running `dh binary --no-act` in the arpwatch packaging dir yields:
> $ dh binary --no-act
>(...)
>dh_installdirs
>dh_auto_install
>(...)
> 
> 
> Can you explain in which situations dh_installdirs will be run after
> dh_auto_install? 

Oh, ok, then I was wrong about that. I had in mind that dh binary first
runs dh_auto_install and then all of the other dh_* things required to
actually create the binary package. But if your call to dh shows
differently, then that won't happen, and I was simply wrong about that.

(TBH, I've only ever used dirs for creating empty directories that are
required by the packaged software during runtime.)

Sorry for the noise.

Regards,
Christian



Bug#858860: RFS: arpwatch [ITA]

2017-04-06 Thread Christian Seiler
On 04/05/2017 07:02 PM, Lukas Schwaighofer wrote:
> On Wed, 5 Apr 2017 18:25:04 +0200
> Hugo Lefeuvre  wrote:
>>> If I remove `usr/sbin` from dirs, buildpackage fails complaining
>>> that the directory does not exist (so something in the build system
>>> is slightly broken).  
>>
>> The error message is
>>
>> /usr/bin/install -c -m 555 -o bin -g bin
>> arpwatch /build/arpwatch-2.1a15/debian/arpwatch/usr/sbin /usr/bin/install:
>> cannot create regular file
>> '/build/arpwatch-2.1a15/debian/arpwatch/usr/sbin': No such file or
>> directory Makefile:114: recipe for target 'install' failed 
>>
>> looks like the Makefile installs files under usr/sbin, but doesn't
>> create the directory if it doesn't exist. This is rather a Makefile
>> bug.
> 
> With "build system" I meant this process of autotools creating the
> Makefile, and `make install` doing something slightly wrong.  Anyway,
> that means keeping `usr/sbin` in the dirs file is the correct "fix",
> right?

The problem is that dirs is only interpreted by dh_installdirs, which
is typically run after dh_auto_install, so that wouldn't actually
solve your problem.

You should probably just patch the build system to create the install
directory if it doesn't exist. (Maybe just use install -D to copy the
file, that will auto-create the directories leading up to the target.)

Regards,
Christian



Re: ethercodes.dat / oui.txt (Was: Re: arpwatch & systemd)

2017-03-27 Thread Christian Seiler
On 03/27/2017 12:24 AM, Lukas Schwaighofer wrote:
> 22:30:39 +0200 Christian Seiler <christ...@iwakd.de> wrote:
> 
>> On 03/26/2017 09:19 PM, Lukas Schwaighofer wrote:
>>> I'm not sure I understand what you mean… should the ethercodes.dat
>>> file be removed / used from a different package?  
>>
>> Yes. See also:
>> https://lintian.debian.org/tags/source-contains-data-from-ieee-data-oui-db.html
>>
>> ieee-data also contains a script that allows the admin to
>> update the listing manually, and other packages can hook into
>> that update process if that's required.
> 
> thanks for clarifying.
> 
> I need to convert the oui.txt database to a different format (the script
> to do that is already available). Two options come to my mind:
> 
> 1. use the maintainer scripts (postinst?) to generate the initial
>version of the converted database, add a hook for ieee-data to keep
>it updated

That seems like the most reasonable thing to do.

> 2. check if the database is up to date when the arpwatch service is
>started by the init system, update it otherwise
> 
> Option 1 seems somewhat cleaner, but if I understand the mechanisms
> correctly, this will only trigger when the admin (or a cron job) calls
> `update-ieee-data`, and not if the ieee-data package gets updated.

Well, you could also add a file-based trigger on
/usr/share/ieee-data/oui.txt.

ieee-data has two directories for the oui.txt file: the packaged
data, which is in /usr/share/ieee-data, and the most up to date
information, which is in /var/lib/ieee-data, and defaults to
symlinks to /usr/share/ieee-data.

So you should always use /var/lib/ieee-data as your data source,
but you can use /usr/share/ieee-data/oui.txt for a file-based
dpkg trigger to hook your postinst script into when ieee-data
itself is updated (but still use the /var/lib dir as the data
source.)

Combine that with the hook into update-ieee-data, and you should
be all set.

I really wouldn't do anything in the init script for this, this
just seems like a waste of resources, plus if something goes
wrong, the admin will have a hard time debugging it, because there
is no direct temporal adjacency between the update of the
database and the problem occurring.

> The easiest way for me to check if the converted database is up-to-date
> is to depend on the existence of /var/lib/ieee-data/.lastupdate . Is
> that ok?

As far as I understand it, yes.

That said: what do you consider outdated? I've never checked how
often the OUI database changes, but as far as I can tell the
updates happen as needed, not according to a specific schedule.
That means that if nobody requested an update, the database is
not out of date, even if it's old.

What you could do is check the .lastupdate file only if a lookup
fails - and if it's older than a week, display a message. But
that would require direct patching of arpwatch, so the much
simpler solution could be to just add an entry to README.Debian
that tells the user to run update-ieee-data if they want to
have an updated database. Currently the database isn't updated at
all, so this is already going to be an improvement.

Regards,
Christian



Re: arpwatch & systemd

2017-03-26 Thread Christian Seiler
On 03/26/2017 09:19 PM, Lukas Schwaighofer wrote:
> Hi Bastien,
> 
> On Fri, 24 Mar 2017 10:56:58 +
> Bastien Roucaries  wrote:
> 
>> Will ne also nice to repack in ordre to remove oui db
> 
> I'm not sure I understand what you mean… should the ethercodes.dat file
> be removed / used from a different package?

Yes. See also:
https://lintian.debian.org/tags/source-contains-data-from-ieee-data-oui-db.html

ieee-data also contains a script that allows the admin to
update the listing manually, and other packages can hook into
that update process if that's required.

Repacking the source seems excessive to me though, since the
database is under a DFSG-compatible license (ieee-data is in
main), but the binary package should probably just depend on
ieee-data. (Or recommend it, if it can live with the file not
being available.)

Regards,
Christian



Re: Help with systemd to start shiny-server needed

2017-03-24 Thread Christian Seiler
On 03/24/2017 04:21 PM, Andreas Tille wrote:
> I intend to package shiny-server and have prepared some preliminary
> packaging in Debian Med Git[1].  When trying to install the resulting
> package I get: [...]
> 
> Mar 24 16:20:23 sputnik systemd[1]: Starting ShinyServer...
> Mar 24 16:20:26 sputnik systemd[1]: shiny-server.service: PID file 
> /var/run/shiny-server.pid not readable (yet?) after start-post: No such file 
> or directory
> Mar 24 16:20:31 sputnik systemd[1]: Failed to start ShinyServer.

Well, your service doesn't appear to write out a PID file (or
it deletes it before it exits again), hence systemd can't
find that PID file and will consider the service to have
failed.

Your logs indicate that the process seems to have exited,
but whether that's because of SIGTERM sent to it or not
is unclear at this point.

First thing is you need to figure out why your service
doesn't appear to write a PID file at all. From the logs
you posted, it could also be that the service exits
prematurely.

Why are you using Type=simple + PIDFile= anyway?

sleep 3 / 5 in ExecStartPost=/ExecStopPost= also seems to
be completely wrong to me.



From my perspective, when writing a systemd unit file for a
service, I'd follow these guidelines:


 - Ideally the service supports systemd's notification
   protocol, in which case I'd use Type=notify and no PID
   file. Doesn't appear to be the case here.

 - If the service is just a program you start and it does
   not fork (so running it on the command line will have
   it be active until you press Ctrl+C or similar), then
   the best thing one can do is Type=simple, but no PID
   file; the process started by systemd is considered to be
   the main PID of that service.

   This is not 100% ideal, as there will be no notification
   of whether the service has been started or not (systemd
   will just assume the service is up if fork+exec works),
   but without a notification protocol, that's the best
   one can do.

 - If a service forks, then I'd use Type=forking. If the
   service supports writing out a PID file [1], then I'd
   tell the service to write a PID file in /run [2], and
   tell systemd to look for it (via PIDFile=).



I don't know much about your specific code, so you'd have to
test this on the command line first (to see how the process
reacts), but you'd either just want a Type=simple OR you'd
want Type=forking with PIDFile=... But you wouldn't want to
have Type=simple with PIDFile=, that just doesn't make any
sense.

And the sleep stuff in *StartPost seems completely wrong to
me. Especially ExecStopPost: please use KillMode=control-group
or KillMode=mixed instead (see docs) if you want to ensure
that all processes of a service have exited.



Regards,
Christian


[1] Most services actually get PID file handling and forking wrong,
which is why many init system developers have developed their own
startup notification mechanisms for services that are way easier
to get right.

What should happen for forking services is (in this order):

 - process forks (possibly twice)
 
 - parent process stays alive and waits for child process to
   signal it that initialization is complete (there are various
   ways of doing this, easiest is just using a pipe(2) and
   closing it from the child when that's done)

 - child process initializes (opens all sockets, log files,
   etc.)

 - child process signals the parent process that it's done

 - parent process writes out the PID file (with the PID of the
   child) to record that the child is now up and running

 - parent process exits to signal the caller that the daemon
   has now been initialized successfully

   (Alternatively you can already initialize in the parent process
   and fork only afterwards when you know you're all set.)

   Basically, from the outside, the proper interface should be:

 - process that was started exits with code 0 and a PID file
   has been written at that time, plus the PID in the PID file
   exists: service is considered started successfully

 - process that was started exits with non-zero code: failure

 - process that was started exits with code 0 but PID file
   does not exist: shouldn't happen

   I see way too many programs out there that don't get this
   completely right. Common mistakes:

 - PID file is written in child process, but possibly at a
   point after parent process has exited

 - PID file is written unconditionally after fork, even if
   initialization in child process has failed

 - parent process exits before child is properly initialized

[2] Btw. in Debian you should use /run instead of /var/run, which
nowadays is just a symlink to /run anyway. For early-boot
services this is required (because /var could be a remote file
system), but even if it's not required, unless there's a fixed
value set by upstream that you don't want to patch out, you
should still prefer /run.

Re: Help to build library in generic form, avx and sse3

2017-03-14 Thread Christian Seiler
On 03/14/2017 03:46 PM, Andreas Tille wrote:
> I've started packaging Phylogenetic Likelihood Library[1].  Since it
> makes heavy use of amd64 features it comes with specific support of AVX
> and SSE3.  My plan is to provide binary packages amd64 only named
> libpll-avx1 and libpll-sse3-1 with the according features plus a generic
> library libpll-generic1 for all architectures.  Upstream supports the
> creation of separate avx and sse3 libs out of the box but I failed to
> create the generic version.  So I have two questions:
> 
>   1. Could anybody please have a look at the automake stuff to
>  enable the build of the generic lib in addition to the other
>  two.  I tried several switches but failed. :-(
> 
>   2. What do you think about the plan to support specific hardware
>  features in separate binary packages?

GCC from version 6 (which is in Debian Stretch) supports function
multi-versioning (and GCC from 4.8 onwards, which is even in Jessie,
supports a subset of that), which allows you to do the following:

 - have a function with generic C/C++ code be compiled multiple
   times in different variants, and have the most optimal variant
   be selected at runtime (requires GCC 6),

   e.g.

   __attribute__((target_clones("avx2","sse3","default")))
   double foo(double a, double b) {
      return a + b;
   }

 - manually write different versions of the function and mark
   them accordingly (requires GCC 4.8)

   __attribute__((target("default")))
   double foo(double a, double b) {
      return a + b;
   }

   __attribute__((target("sse3")))
   double foo(double a, double b) {
      SOME_FANCY_SSE3_CODE;
   }

   __attribute__((target("avx2")))
   double foo(double a, double b) {
      SOME_FANCY_AVX2_CODE;
   }

So from a purely technical perspective I think the best solution
would probably be to work with upstream to allow them to support
FMV properly - and then you only need to compile a single library
version that will work everywhere, but will select the optimal
algorithm depending on the machine it's run on - win/win.

Further reading:
https://lwn.net/Articles/691932/

Regards,
Christian



Re: Packaging a gui app

2017-02-26 Thread Christian Seiler
On 02/26/2017 04:52 PM, The Wanderer wrote:
> On 2017-02-26 at 10:47, Ghislain Vaillant wrote:
> 
>> On Sun, 2017-02-26 at 10:15 -0500, matt jones wrote:
>>
>>> I am packaging a gui that has dependencies for qt and such. How do
>>> I go about ensuring that X is available as well? Do I list that as
>>> a dependency as well. The upstream maintainers don’t call it out
>>> specifically but it is understood. Links to docs are always
>>> welcome.
>>
>> Usually, the toolkit your application depends on (here Qt), will
>> bring the necessary dependencies for you. So you don't need to care
>> about X.
> 
> I recall that historically the rule was "you don't depend on having X
> packages installed" regardless, on the grounds that it is or was
> possible to connect to an X instance running on a different machine (it
> is called "the X server", after all) - but I don't spot that in current
> policy, and I do seem to remember reading discussion about repealing
> that rule on the grounds that doing this hasn't actually _worked_ in
> modern X for years if not longer.

I'm not saying it works great, and X forwarding has its problems, but
in general from my experience most programs do still work when forwarded
via X11. Heck, even Firefox works. And I know plenty of people that use
X11 forwarding in various ways (though not necessarily Firefox). Even
OpenGL stuff works, it just falls back to software rendering via Mesa in
that case.

From my POV, packaging a GUI application is simple in general:

 - dh_shlibdeps will take care of all the dependencies via shared
   libraries (e.g. Qt) automatically, so you don't have to care about
   that directly

 - if you require certain plugins for a library you are using to be
   available, Depend: or Recommend: those packages (depending on how
   fatal their non-availability is)

 - if you need some framework such as KDE, Depend: on that (for
   example, KDE5 packages typically require a Depends: kde-runtime)

 - if you require a DBus bus to be around, Depend: dbus-x11
   (or Recommend: it if the non-availability is non-fatal)

 - if the package contains any scripts, make sure that any tools
   required from those scripts are in your dependencies (since they
   won't be auto-detected at build time)
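
To make that concrete, the relevant part of d/control for a
hypothetical Qt application could look like this (all of the
package names below are purely illustrative):

Package: myguiapp
Architecture: any
Depends: ${shlibs:Depends}, ${misc:Depends}, dbus-x11
Recommends: qt5-image-formats-plugins
Description: example GUI application
 Longer description of the example application.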

Regards,
Christian



Re: serious bug in usbguard installation

2017-02-04 Thread Christian Seiler
On 02/04/2017 11:25 PM, Christian Seiler wrote:
> That said: I just tried this in a VM, and systemd appears to be
> quite broken if you try to start a Type=dbus unit when DBus is
> installed, but not properly configured. And while that is not
> normally the case, I couldn't get systemd to work properly again
> without rebooting it, not even a daemon-reexec worked. So I think
> this would also qualify as a systemd bug here - it should just
> fail the Type=dbus unit at this point, and not go into an endless
> loop. I'll report that separately.

Ok, you can break the endless loop by stopping dbus.socket, so
systemd actually works as expected here.

Regards,
Christian



Re: serious bug in usbguard installation

2017-02-04 Thread Christian Seiler
On 02/04/2017 10:09 PM, Muri Nicanor wrote:
> i just found a bug (#854192) in the installation procedure of usbguard:
> when i install usbguard on a minimal stretch system, the installation
> stalls and never ends successfully. apparently it has something to do
> with dbus being a dependency of usbguard. if i install dbus *before*
> installing usbugard, everything works fine. this is probably, why it
> didn't come up before. if i don't, the installations procedure stalls at
>> /var/lib/dpkg/info/usbguard.postinst configure
> 
> and the journal says
>> Feb 04 13:11:04 debian dbus-daemon[1200]: Unknown username
>> "usbguard-dbus" in message bus configuration file
>> Feb 04 13:11:04 debian dbus-daemon[1200]: Failed to start message
>> bus: Could not get UID and GID for username "messagebus"

Problem is that DBus fails to start, and systemd requires DBus to be
running (and configured properly) if Type=dbus is used.

The problem is that your package doesn't have Depends: dbus, so it
doesn't depend on the DBus daemon being available. APT may therefore
configure dbus after usbguard (it's allowed to do that without an
explicit Depends), which is bad, since dbus's postinst creates the
'messagebus' user, without which the DBus daemon doesn't start.

Fix is simple: add that dependency. :-) If you look at other DBus
services, they all have that dependency explicitly.
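
For illustration, the Depends line in d/control would then look
something like this (substvars as in your current packaging):

Depends: ${shlibs:Depends}, ${misc:Depends}, dbus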

That said: I just tried this in a VM, and systemd appears to be
quite broken if you try to start a Type=dbus unit when DBus is
installed, but not properly configured. And while that is not
normally the case, I couldn't get systemd to work properly again
without rebooting it, not even a daemon-reexec worked. So I think
this would also qualify as a systemd bug here - it should just
fail the Type=dbus unit at this point, and not go into an endless
loop. I'll report that separately.

Regards,
Christian



Re: Gentle does not build on two architectures

2017-01-31 Thread Christian Seiler
Hi Andreas,

On 01/31/2017 09:07 AM, Andreas Tille wrote:
> while gentle 1.9+cvs20100605+dfsg1-5 has migrated to testing and #845844
> is marked as done it still affects unstable since it does not build on
> kfreebsd-amd64 and x32[1].  On both architectures it fails to build with
> 
> 
> /usr/bin/ld: SequenceTypeAAstructure.o: relocation R_X86_64_32S against 
> `.rodata' can not be used when making a shared object; recompile with -fPIC
> /usr/bin/ld: OnlineTools.o: relocation R_X86_64_32S against `.rodata.str4.4' 
> can not be used when making a shared object; recompile with -fPIC
> /usr/bin/ld: TEliteLaChromLogDialog.o: relocation R_X86_64_32S against 
> `.rodata.str4.4' can not be used when making a shared object; recompile with 
> -fPIC
> /usr/bin/ld: TRestrictionIdentifier.o: relocation R_X86_64_32S against 
> `.rodata.str4.8' can not be used when making a shared object; recompile with 
> -fPIC
> /usr/bin/ld: final link failed: Nonrepresentable section on output
> collect2: error: ld returned 1 exit status

Well, if you look at the build log:

g++ -DPACKAGE_NAME=\"\" -DPACKAGE_TARNAME=\"\" -DPACKAGE_VERSION=\"\" 
-DPACKAGE_STRING=\"\" -DPACKAGE_BUGREPORT=\"\" -DPACKAGE_URL=\"\" 
-DPACKAGE=\"GENtle\" -DVERSION=\"1.5\" -I.   -Wdate-time -D_FORTIFY_SOURCE=2 
-I/usr/lib/x86_64-kfreebsd-gnu/wx/include/gtk2-unicode-3.0 
-I/usr/include/wx-3.0 -D_FILE_OFFSET_BITS=64 -DWXUSINGDLL -D__WXGTK__  
-D__DEBIAN__ -O2 -g -Wno-write-strings -DUSE_EXTERNAL_CLUSTALW -c -o 
SequenceTypeAAstructure.o SequenceTypeAAstructure.cpp

It appears the dpkg-buildflags --get CXXFLAGS aren't passed in properly
to that compile command, because the -specs=/usr/share/dpkg/pie-compile.specs
is missing from that line.

The build log scanner agrees with me:
https://qa.debian.org/bls/packages/g/gentle.html
(Other flags are not passed either; that's why it complains. The scanner
is linked from the package tracker btw.)

And the only reason they built on the other archs is that the upload
was done before PIE was enabled by default (and just on kfreebsd-amd64
and x32 the builds were attempted later). I suspect that the package
actually FTBFS on all archs that don't have PIE enabled in the compiler
by default now (haven't tried it though).

> Any hint what to do here?

The package's build system is actually fine: very simple autoconf +
automake with no customization that overrides flags.

So what's going on is that you have a broken debian/rules:

CXXFLAGS = "-D__DEBIAN__ -O2 -g -Wno-write-strings -DUSE_EXTERNAL_CLUSTALW"

This is just plain wrong. You should rather do:

export DEB_CXXFLAGS_MAINT_APPEND = -D__DEBIAN__ -O2 -g -Wno-write-strings -DUSE_EXTERNAL_CLUSTALW

dpkg-buildflags will then take care of the rest. See also:
https://wiki.debian.org/HardeningWalkthrough#How_can_I_use_additional_flags.3F

If you do that, you can then also get rid of:

override_dh_auto_configure:
	CXXFLAGS=$(CXXFLAGS) dh_auto_configure

override_dh_auto_build:
	$(MAKE) -k CXXFLAGS=$(CXXFLAGS)

in your d/rules.

Hope that helps.

Regards,
Christian



Re: as upstream - Makes sense to run 'make clean' when running 'make all'?

2017-01-04 Thread Christian Seiler
On 01/04/2017 07:20 PM, Patrick Schleizer wrote:
> as upstream, does it make sense to run 'make clean' when running 'make all'?

Typically it doesn't because it breaks incremental builds, which makes
development uglier. (You have to rebuild everything every time you call
'make'.)

For the purpose of Debian packages it doesn't matter though, as long
as you call clean _before_ building the rest.

> Would that be considered good or bad? Any convention on that?

The typical Makefile convention is:

 - Build stuff via:

     make all

   or simply

     make

   Should be idempotent, so multiple calls in a row should
   work.

 - Clean the build directory:

     make clean

   Should also be idempotent; calling this in an already cleaned
   tree should change nothing.

 - Install the software:

     make install

 - Install the software but underneath a specific tree

     make install DESTDIR=/tmp/build/install
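
Put together, a minimal Makefile following these conventions could
look like this (a sketch, not taken from any real project):

PREFIX ?= /usr/local

all: myprog

myprog: main.o
	$(CC) $(LDFLAGS) -o $@ main.o

clean:
	rm -f myprog *.o

install: all
	install -D -m 0755 myprog $(DESTDIR)$(PREFIX)/bin/myprog

.PHONY: all clean install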

Now technically calling make clean at the beginning of the build
step doesn't have to break idempotence if you do it right - but I
would still recommend against it because it makes developing the
software much more painful. Doesn't really matter for Debian
though, because Debian builds the entire package from scratch
anyway.

Regards,
Christian



Bug#848993: RFS: llmnrd/0.2-1 [ITP]

2016-12-23 Thread Christian Seiler
Hi there,

sorry for the formatting, writing this on my phone.


On 23 December 2016 10:18:52 CET, Andreas Henriksson
<andr...@fatal.se> wrote:
>On Fri, Dec 23, 2016 at 12:12:17AM +0100, Christian Seiler wrote:
>>  - init.d: this file name works with dh_installinit, but is not
>>documented, so I'd recommend using llmnrd.init as the file name
>
>I see you're already credited by upstream so I assume you have
>already established a good relationship with your upstream.
>That's very good and very useful. Keep your upstream happy.
>Upstreams like contributions. You have a golden opportunity 
>on upstream issue #2.

I'm not sure that'll work. In contrast to systemd services, init scripts are
necessarily very distro-dependent. You can hack together something that's 
cross-distro, but that's really ugly.

Also, Debian (+ derivatives) is just about the only major distro that still 
supports traditional init scripts, except for maybe Slackware: Gentoo always 
had their own thing that wasn't compatible.

RH had /etc/sysconfig instead of /etc/default and had different includes for 
helper functions, just to give an idea of what differences there are. SuSE had yet
another include library. RH didn't support LSB headers but had similar headers
based on chkconfig to express dependencies.

>>  - init.d: any particular reason you don't use init-d-script? (See
>>current /etc/init.d/skeleton for how this works; it will
>>automatically source /etc/default/$scriptname and interpret the
>>DAEMON_ARGS variable, so your init script could probably be just
>>a couple of lines that set the name of the executable)
>
>I'd recommend *against* "init-d-script". It has several outstanding
>issues, is unmaintained/orphaned/unproven and AIUI that also means the
>init script becomes debian-only.

IMHO init scripts are distro-dependent anyway (see above). I didn't know about 
the issues in init-d-script and since I use that in my own packages, I'll look 
into that. Any pointers?


>>  - any reason you don't install the systemd service provided by
>>upstream in addition to the init script?
>
>Please do. Please also consider improving the systemd service
>shipped by upstream. (Another golden opportunity for upstream
>contributions.)
>Most importantly have a look at the User= directive as it seems
>like running unprivileged is preferred (see upstream issue #4).
>See also the Restrict*= directives provided by systemd which
>would also be nice to limit the potential attack surface.

Ack.

>>  - you should probably add a line "export Q =" to debian/rules to
>>disable silent builds. While these look nicer, automated build
>>log scanners such as blhc aren't able to catch problems.
>
>debhelper today automatically disables silent rules when building
>on buildds. Using Q environment variables isn't the normal thing
>though.
>Even better than to explicitly disable silent build would be to
>hook up Q to the automatic debhelper version (V=1?).


Yeah, probably do something like

ifneq ($(V),1)
Q?=@
endif

instead of just

Q?=@

in upstream's Makefile.

That said: I concur that these are all minor issues that can be fixed later and 
that d/copyright is the only blocker for an upload. And if this is to go into 
Stretch, the upload needs to happen today.

Since Andreas is willing to sponsor I'd recommend fixing that issue immediately 
and fixing the rest after Jan. 5th, once it is in Stretch.

Regards,
Christian



Bug#848993: RFS: llmnrd/0.2-1 [ITP]

2016-12-22 Thread Christian Seiler
Hi,

as announced on IRC, I'm just doing a review, since I'm not a DD
and can't sponsor:

 - packaging in a VCS would be nice to have (plus the appropriate
   Vcs-Browser / Vcs-... headers in d/control)

 - debian/copyright:

 * Tobias Klauser wasn't just active in 2016, the earliest
   copyright notice of his I could find in the package is
   from 2014; so s/2016/2014-2016/ there

 * missing mention of Copyright (C) 2012 Christoph Jaeger
   for pkt.h

 * missing mention of Copyright (C) 2009-2012 Daniel
   Borkmann for util.[ch]

 - debian/compat: why only 9? compat 10 is considered stable now
   and unless you have a good reason I would recommend that any new
   package should use compat 10. (please read the debhelper manual
   though for information on what changed between 9 and 10)

 - init.d: this file name works with dh_installinit, but is not
   documented, so I'd recommend using llmnrd.init as the file name

 - init.d: any particular reason you don't use init-d-script? (See
   current /etc/init.d/skeleton for how this works; it will
   automatically source /etc/default/$scriptname and interpret the
   DAEMON_ARGS variable, so your init script could probably be just
   a couple of lines that set the name of the executable; see
   the sketch at the end of this mail)

 - any reason you don't install the systemd service provided by
   upstream in addition to the init script?

 - debian/rules: nice and clean, I like it

 - upstream's build system does git id to get the git revision of
   the current source - but that will clash if you have the packaging
   in git (which can happen implicitly when someone checks out the
   package source via e.g. dgit)

   Minor cosmetic thing, but makes the package non-reproducible
   depending on whether you build from unpacked .dsc or from a git
   environment.

 - lintian warnings:
   W: llmnrd: binary-without-manpage usr/bin/llmnr-query
   W: llmnrd: binary-without-manpage usr/sbin/llmnrd


 - you should probably add a line "export Q =" to debian/rules to
   disable silent builds. While these look nicer, automated build
   log scanners such as blhc aren't able to catch problems.

 - Building in sbuild appears to work fine.

 - Package appears to work fine (though I don't have any llmnr
   device running at the moment, so I could only test name
   resolution of my own system)
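
As promised above, a sketch of what the init-d-script based script
could look like (the LSB header values here are assumptions, adjust
them to what llmnrd actually needs):

#!/bin/sh /lib/init/init-d-script
### BEGIN INIT INFO
# Provides:          llmnrd
# Required-Start:    $network $remote_fs $syslog
# Required-Stop:     $network $remote_fs $syslog
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: LLMNR responder daemon
### END INIT INFO
DESC="LLMNR responder daemon"
DAEMON=/usr/sbin/llmnrd

init-d-script then provides the start/stop/restart/status actions
itself and sources /etc/default/llmnrd for DAEMON_ARGS.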

Regards,
Christian



Bug#846306: RFS: ondir/0.2.3+git0.55279f03-1 [ITP]

2016-12-22 Thread Christian Seiler
Hi Gianfranco,

thank you very, very much for sponsoring and your proactiveness
w.r.t. the public copyright statement issue!

On 12/22/2016 03:10 PM, Gianfranco Costamagna wrote:
>>> CFLAGS_FOR_MAKEFILE=$(shell dpkg-buildflags --get CPPFLAGS) $(shell 
>>> dpkg-buildflags --get CFLAGS) -DVERSION=\"$(VERSION)\" 
>>> -DGLOBAL_CONF=\"/etc/onddirrc\"
>>>
>>> I prefer CFLAGS and then CPPFLAGS
> 
> you didn't change the order, but nevermind :)

I completely missed that part of the sentence, sorry. Any
particular reason why you prefer it that way? (To me it seems
logical the other way around, since the preprocessor is run
before the compiler. But OTOH I don't really care, so had I
not missed the sentence, I would have changed the order.)

> peter said *exactly* my opinion. Overriding flags in Makefile should be done
> only when necessary and "cum grano salis"

Well, I'll probably send a github pull request that updates the
Makefile to allow external flags to be passed in via environment
variables. If that gets merged upstream in time for another
upload before the deep freeze on Feb. 5th I'll prepare a new
upload with this fixed, otherwise this will have to wait until
the Buster release cycle.

I'd like to wait until the current version migrates to stretch
in 10-11 days before preparing any new upload though, otherwise
the package won't be part of Stretch at all.

Anyway, thanks again!

Regards,
Christian



Bug#846306: RFS: ondir/0.2.3+git0.55279f03-1 [ITP]

2016-12-22 Thread Christian Seiler
Control: tags -1 - moreinfo

Hi Gianfranco,

I've uploaded an updated version of the package to mentors (and
also to git on alioth) that fixes these issues.

On 12/22/2016 12:29 PM, Christian Seiler wrote:
> On 12/22/2016 12:05 PM, Gianfranco Costamagna wrote:
>> 1) chmod a-x debian/ondir/usr/share/ondir/integration/*
>>
>> why no dh_fixperms override?
> 
> I forgot about dh_fixperms, will change that in the next iteration.

Done.

>> 2) 
>> CFLAGS_FOR_MAKEFILE=$(shell dpkg-buildflags --get CPPFLAGS) $(shell 
>> dpkg-buildflags --get CFLAGS) -DVERSION=\"$(VERSION)\" 
>> -DGLOBAL_CONF=\"/etc/onddirrc\"
>>
>> I prefer CFLAGS and then CPPFLAGS
>> LDFLAGS_FOR_MAKEFILE=$(shell dpkg-buildflags --get CFLAGS) $(shell 
>> dpkg-buildflags --get LDFLAGS)
>>
>> why CFLAGS in LDFLAGS?

That wasn't actually required, so I dropped it. Additionally,
after thinking about what Peter said in this thread, I removed
the -DVERSION=... from this line and rather added it to a
DEB_CPPFLAGS_MAINT_APPEND, which seems cleaner to me.

>> 3) please ask upstream about the "+" in license
> 
> I've sent upstream an email about this.

I've done so, received a quick response that v2 or later is ok,
and added a comment to d/copyright.

Updated package available under:

https://mentors.debian.net/package/ondir
https://mentors.debian.net/debian/pool/main/o/ondir/ondir_0.2.3+git0.55279f03-1.dsc
gbp clone https://anonscm.debian.org/git/collab-maint/ondir.git

Regards,
Christian



Bug#846306: RFS: ondir/0.2.3+git0.55279f03-1 [ITP]

2016-12-22 Thread Christian Seiler
On 12/22/2016 01:18 PM, Peter Pentchev wrote:
> On Thu, Dec 22, 2016 at 12:29:23PM +0100, Christian Seiler wrote:
>> Hi Gianfranco,
>>
>> Thanks for taking care of this.
>>
>> On 12/22/2016 12:05 PM, Gianfranco Costamagna wrote:
> [snip]
>>> why override dh_auto_build and dh_auto_install?
>>> probably exporting LDFLAGS and CFLAGS should work
>>
>> No, it won't, because I have to override the variables in the
>> Makefile.
>>
>> For a simple example, take the following Makefile:
> [snip]
>>
>> If one uses cmake or autoconf or similar, then environment variables
>> are sufficient. If the Makefile uses ?= to set the environment variables,
>> then as well. But since upstream's Makefile uses a plain = for the
>> assignment of the environment variable, we need to override that
>> explicitly via an argument to make.
> 
> That's why I always add a patch to the Makefile that changes the "=" to
> "?=" and then send it upstream; so far the upstream authors have always
> accepted such trivial yet quite useful patches :)

I'd like to get this into Stretch, and while I do believe that
upstream is likely to accept such a patch, the additional round
trip time for that (the package is slow-moving) doesn't seem
worth the small gain in elegance in d/rules right now.

But thanks for this suggestion, I'll definitely do so at the
beginning of the Buster release cycle, so once the package has
been accepted, I'll open a bug with severity wishlist for this,
so I don't forget it.

But thinking about this, I do think I can make d/rules more
readable regardless, by using DEB_CPPFLAGS_MAINT_APPEND instead
of hard-coding them into the line. Thanks for letting me think
of that. :)

Regards,
Christian



Bug#846306: RFS: ondir/0.2.3+git0.55279f03-1 [ITP]

2016-12-22 Thread Christian Seiler
Hi Gianfranco,

Thanks for taking care of this.

On 12/22/2016 12:05 PM, Gianfranco Costamagna wrote:
> 1) chmod a-x debian/ondir/usr/share/ondir/integration/*
> 
> why no dh_fixperms override?

I forgot about dh_fixperms, will change that in the next iteration.

> 2) 
> CFLAGS_FOR_MAKEFILE=$(shell dpkg-buildflags --get CPPFLAGS) $(shell 
> dpkg-buildflags --get CFLAGS) -DVERSION=\"$(VERSION)\" 
> -DGLOBAL_CONF=\"/etc/onddirrc\"
> 
> I prefer CFLAGS and then CPPFLAGS
> LDFLAGS_FOR_MAKEFILE=$(shell dpkg-buildflags --get CFLAGS) $(shell 
> dpkg-buildflags --get LDFLAGS)
> 
> why CFLAGS in LDFLAGS?

Good question. I'll get back to you on that. I think I had a
reason for it, but I don't remember it. If there is a good
reason, I'll add a comment to d/rules; if there isn't, I'll
drop CFLAGS from there.

> why override dh_auto_build and dh_auto_install?
> probably exporting LDFLAGS and CFLAGS should work

No, it won't, because I have to override the variables in the
Makefile.

For a simple example, take the following Makefile:

CFLAGS = -O2
all:
	@echo $(CFLAGS)

Then you get:

env var:
$ CFLAGS=-O0 make
-O2

argument:
$ make CFLAGS=-O0
-O0

If one uses cmake or autoconf or similar, then environment variables
are sufficient. The same holds if the Makefile uses ?= to set its
variables. But since upstream's Makefile uses a plain = for the
assignment of the variable, we need to override that
explicitly via an argument to make.
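
For illustration, the resulting override in d/rules can look like
this (a sketch; dh_auto_build passes everything after "--" on to
make, and CFLAGS here is just the toy variable from the example
above):

override_dh_auto_build:
	dh_auto_build -- CFLAGS="$(shell dpkg-buildflags --get CFLAGS)"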

> 3) please ask upstream about the "+" in license

I've sent upstream an email about this.

> 4) missing hardening flags
> http://debomatic-amd64.debian.net/distribution#unstable/ondir/0.2.3+git0.55279f03-1/blhc

That's a false positive: since gcc now sets PIE by default on various
architectures (amd64 included), dpkg-buildflags doesn't pass it any
more.

The binary itself is PIE:

readelf -d /usr/bin/ondir | grep PIE
 0x000000006ffffffb (FLAGS_1)            Flags: NOW PIE

Compare the output of:

DEB_BUILD_MAINT_OPTIONS=hardening=+pie dpkg-buildflags --get CFLAGS

and

DEB_BUILD_MAINT_OPTIONS=hardening=-pie dpkg-buildflags --get CFLAGS

on both Jessie and Stretch/sid.



Once I hear back from upstream about GPL-2/2+, I'll get back to
you again. (With an updated package that also cleans up the other
issues.)

Regards,
Christian



Bug#846306: RFS: ondir/0.2.3+git0.55279f03-1 [ITP]

2016-12-21 Thread Christian Seiler
Dear mentors,

I'd appreciate it if a friendly DD could have a look at this
package and sponsor it. Thanks. :)

On 11/30/2016 02:03 AM, Christian Seiler wrote:
> Package: sponsorship-requests
> Severity: wishlist
> Control: block 846237 by -1
> 
> Dear mentors,
> 
> I am looking for a sponsor for my package "ondir"
> 
>  * Package name: ondir
>Version : 0.2.3+git0.55279f03-1
>Upstream Author : Alec Thomas <a...@swapoff.org>
>  * URL : http://swapoff.org/ondir.html
>  * License : GPL-2
>Section : utils
> 
> It builds those binary packages:
> 
>   ondir - Automate tasks specific to certain directories in the shell
> 
> To access further information about this package, please visit the following 
> URL:
> 
> https://mentors.debian.net/package/ondir
> 
> 
> Alternatively, one can download the package with dget using this command:
> 
>   dget -x 
> https://mentors.debian.net/debian/pool/main/o/ondir/ondir_0.2.3+git0.55279f03-1.dsc
> 
> The package is also available via git in the debian/master branch of:
> 
> https://anonscm.debian.org/git/collab-maint/ondir.git

Regards,
Christian



Re: Source upload of r-cran-treescape does not build on any architecture - but why?

2016-12-21 Thread Christian Seiler
On 12/21/2016 12:19 PM, Christian Seiler wrote:
> On 12/21/2016 12:04 PM, Andreas Tille wrote:
>> Is bach.hen...@gmail.com the correct address for "contacting wanna-build
>> people"?  If yes, Henrik is in CC - if not what's the proper contact?
> 
> There's a mailing list for that:
> 
> https://lists.debian.org/debian-wb-team/

Also note:

https://lists.debian.org/debian-wb-team/2016/12/msg00033.html

So just be patient for a couple more hours.

Regards,
Christian



Re: Source upload of r-cran-treescape does not build on any architecture - but why?

2016-12-21 Thread Christian Seiler
Hi,
(dropping cc)

On 12/21/2016 12:04 PM, Andreas Tille wrote:
> Is bach.hen...@gmail.com the correct address for "contacting wanna-build
> people"?  If yes, Henrik is in CC - if not what's the proper contact?

There's a mailing list for that:

https://lists.debian.org/debian-wb-team/

See also the footer of the buildd webpage:

| Architecture specific issues should be sent to <$a...@buildd.debian.org>
| Service maintained by the wanna-build team 

And see also DevRef 5.10.3.3:
https://www.debian.org/doc/manuals/developers-reference/ch05.en.html#wanna-build

Regards,
Christian



Re: Source upload of r-cran-treescape does not build on any architecture - but why?

2016-12-21 Thread Christian Seiler
On 12/21/2016 11:37 AM, Andreas Tille wrote:
> I did a source upload of r-cran-treescape at 2016-12-19 21:51:35.
> 
> When looking at the build log page[1] I realise that for some architectures
> a Build-Depends is missing but I have no idea why for instance amd64 is
> not built after > 36 hours.
> 
> Any ideas?

They have been tried a lot of times already; you can see that by looking at
"all (10)" for example (in the amd64 case).

A failed log looks like this:
https://buildd.debian.org/status/fetch.php?pkg=r-cran-treescape&arch=amd64&ver=1.10.18-3&stamp=1482303093

It appears to be the case that the fix for the maintscript issue with dpkg
is still affecting the buildds and r-base-core is uninstallable - in this
case probably because the chroots haven't been updated yet and dpkg comes
preinstalled in the chroots.

This doesn't count as a build failure, because the build dependencies of
the package couldn't be installed successfully, so it's not your own
package's fault. The buildds will periodically try to build the package
again until the underlying problem is fixed.

You could ask the wanna-build people to manually update the chroots to fix
this issue if you don't want to wait until the next automatic update.
(IIRC that happens twice a week.)

Regards,
Christian



Re: Possible workaround

2016-12-15 Thread Christian Seiler
On 12/15/2016 03:03 PM, Dirk Eddelbuettel wrote:
> 
> On 15 December 2016 at 14:42, Christian Seiler wrote:
> | On 12/15/2016 02:37 PM, Dirk Eddelbuettel wrote:
> | > On 15 December 2016 at 14:26, Andreas Tille wrote:
> | > | Sorry, but I have no idea how since I'm totally clueless currently and
> | > | upstream also did not yet respond to this after the initial idea that
> | > | it might be some ape related issue was not helpful.  Do you in turn see
> | > | any chance to push this question to the right forum in the R community?
> | > 
> | > Not really. All (well, most) builds at their end are fine [1]. They would
> | > likely suggest that we sort our (local to them) issues out at our end.
> | > 
> | > How to run R with gdb is discussed in Writing R Extensions.  Maybe we
> | > need to start with some stacktraces to see who calls whom how.
> | 
> | I had already posted a gdb backtrace here:
> | https://lists.debian.org/debian-mentors/2016/12/msg00412.html
> | 
> | Any idea how to get the corresponding R backtrace from this?
> | 
> | (R's own debug() will obviously not work if there's a C stack
> | overflow.)
> 
> Use
> 
>   R -d gdb [other options you may use]
> 
> which is described in the manual I referenced earlier:
> https://cran.r-project.org/doc/manuals/r-release/R-exts.html#Debugging-compiled-code

Then the error doesn't occur, unfortunately.

If I run R -d gdb and then do the action manually (by calling
the corresponding R function), then everything works, even with the
lower stack limit. (I mentioned this in an earlier email.)

If I run the R command directly, and attach gdb while it's still
running (luckily it takes a couple of seconds), then the error
occurs, but I get a horrible stack trace.

I assume R -d gdb starts gdb with some initialization file - can I
load that into gdb manually? If so, where can I find that file?

Regards,
Christian



Re: Possible workaround

2016-12-15 Thread Christian Seiler
On 12/15/2016 02:37 PM, Dirk Eddelbuettel wrote:
> On 15 December 2016 at 14:26, Andreas Tille wrote:
> | Sorry, but I have no idea how since I'm totally clueless currently and
> | upstream also did not yet respond to this after the initial idea that
> | it might be some ape related issue was not helpful.  Do you in turn see
> | any chance to push this question to the right forum in the R community?
> 
> Not really. All (well, most) builds at their end are fine [1]. They would
> likely suggest that
> we sort our (local to them) issues out at our end.
> 
> How to run R with gdb is discussed in Writing R Extensions.  Maybe we need
> to start with some stacktraces to see who calls whom how.

I had already posted a gdb backtrace here:
https://lists.debian.org/debian-mentors/2016/12/msg00412.html

Any idea how to get the corresponding R backtrace from this?

(R's own debug() will obviously not work if there's a C stack
overflow.)

Regards,
Christian



Re: Possible workaround

2016-12-14 Thread Christian Seiler
Hi,

On 12/14/2016 04:16 PM, Dirk Eddelbuettel wrote:
> One quick thought: does it die in _compilation_ which we have seen with other
> (C++-heavy) packages?

No, g++ works fine here. (The C++ file itself is trivial if you
look at it.)

Current package in Debian:
http://sources.debian.net/src/r-cran-treescape/1.10.18-2/

> Otherwise if it fails _after_ compilation we may be able to get by turning
> some default aspects of R CMD INSTALL off:
> 
>   --no-byte-compile do not byte-compile R code

That doesn't help, still fails. :-(

>   --no-test-loadskip test of loading installed package

That doesn't help either. :-(

From the build log when it fails (8 MiB stack limit):

* installing *source* package 'treescape' ...
** package 'treescape' successfully unpacked and MD5 sums checked
** libs
g++ -I/usr/share/R/include -DNDEBUG   -I"/usr/lib/R/site-library/Rcpp/include"  
 -fpic  -g -O2 -fdebug-prefix-map=/build/r-base-PAdLwq/r-base-3.3.2=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
-D_FORTIFY_SOURCE=2 -g  -c CPP_update_combinations.cpp -o 
CPP_update_combinations.o
g++ -I/usr/share/R/include -DNDEBUG   -I"/usr/lib/R/site-library/Rcpp/include"  
 -fpic  -g -O2 -fdebug-prefix-map=/build/r-base-PAdLwq/r-base-3.3.2=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
-D_FORTIFY_SOURCE=2 -g  -c RcppExports.cpp -o RcppExports.o
g++ -shared -L/usr/lib/R/lib -Wl,-z,relro -o treescape.so 
CPP_update_combinations.o RcppExports.o -L/usr/lib/R/lib -lR
installing to 
/home/christian/r-cran-treescape-1.10.18/debian/r-cran-treescape/usr/lib/R/site-library/treescape/libs
** R
** data
*** moving datasets to lazyload DB
** inst
** preparing package for lazy loading
Creating a generic function for 'toJSON' from package 'jsonlite' in package 
'googleVis'
Warning in rgl.init(initValue, onlyNULL) :
  RGL: unable to open X11 display
Warning: 'rgl_init' failed, running with rgl.useNULL = TRUE
Error: segfault from C stack overflow
* removing 
'/home/christian/r-cran-treescape-1.10.18/debian/r-cran-treescape/usr/lib/R/site-library/treescape'

(Ignore the X warnings, they are irrelevant here, I'm too lazy to run
it in xvfb and it's in a VM without X.)

When it succeeds (195.3 MiB stack limit):

* installing *source* package 'treescape' ...
** package 'treescape' successfully unpacked and MD5 sums checked
** libs
g++ -I/usr/share/R/include -DNDEBUG   -I"/usr/lib/R/site-library/Rcpp/include"  
 -fpic  -g -O2 -fdebug-prefix-map=/build/r-base-PAdLwq/r-base-3.3.2=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
-D_FORTIFY_SOURCE=2 -g  -c CPP_update_combinations.cpp -o 
CPP_update_combinations.o
g++ -I/usr/share/R/include -DNDEBUG   -I"/usr/lib/R/site-library/Rcpp/include"  
 -fpic  -g -O2 -fdebug-prefix-map=/build/r-base-PAdLwq/r-base-3.3.2=. 
-fstack-protector-strong -Wformat -Werror=format-security -Wdate-time 
-D_FORTIFY_SOURCE=2 -g  -c RcppExports.cpp -o RcppExports.o
g++ -shared -L/usr/lib/R/lib -Wl,-z,relro -o treescape.so 
CPP_update_combinations.o RcppExports.o -L/usr/lib/R/lib -lR
installing to 
/home/christian/r-cran-treescape-1.10.18/debian/r-cran-treescape/usr/lib/R/site-library/treescape/libs
** R
** data
*** moving datasets to lazyload DB
** inst
** preparing package for lazy loading
Creating a generic function for 'toJSON' from package 'jsonlite' in package 
'googleVis'
Warning in rgl.init(initValue, onlyNULL) :
  RGL: unable to open X11 display
Warning: 'rgl_init' failed, running with rgl.useNULL = TRUE
** help
*** installing help indices
** building package indices
** installing vignettes
** testing if installed package can be loaded
Creating a generic function for 'toJSON' from package 'jsonlite' in package 
'googleVis'
Warning in rgl.init(initValue, onlyNULL) :
  RGL: unable to open X11 display
Warning: 'rgl_init' failed, running with rgl.useNULL = TRUE
* DONE (treescape)

So the problem occurs at the following step:

  ** preparing package for lazy loading

And, to recap the specific circumstances where this problem appears:

 - 32bit
 - Little Endian architecture
 - Linux 3.16
 - Standard stack size limit (8 MiB)
 - treescape module version >= 1.10.17

Change only one of these things and it will work:

 - 64bit Little Endian Linux 3.16, standard stack limit: works
  e.g. amd64, arm64
 - 32bit Big Endian Linux 3.16, standard stack limit: works
  e.g. powerpc
 - 32bit Little Endian Linux 4.7.x or higher, standard stack limit: works
  e.g. i386 on my own system with newer kernel, or the mipsel
  build server of Debian with a backported kernel
 - 32bit Little Endian Linux 3.16, huge stack limit: works
 - older version 1.9.18: works

Note that different kernel versions really mean just the kernel,
the libraries and tools are 100% identical. (I mean libc, R, gcc,
and so on.)

I'm at a complete loss why the kernel version is even relevant here
btw., since the program uses a huge stack, but there is no system
call related 

Re: Possible workaround

2016-12-14 Thread Christian Seiler
Hi Andreas,

On 12/14/2016 03:59 PM, Andreas Tille wrote:
> thanks a lot for your extensive analysis about of the stack problem.  I
> admit I have no idea why this large stack is needed on those
> architectures with stable kernel.  I also have no idea why everything
> went fine with treescape version 1.10.17.

For the record: 1.10.17 also failed its build on i386:

https://buildd.debian.org/status/fetch.php?pkg=r-cran-treescape&arch=i386&ver=1.10.17-1&stamp=1480164348

The last (and only) successful build was 1.9.17:

https://buildd.debian.org/status/fetch.php?pkg=r-cran-treescape&arch=i386&ver=1.9.17-1&stamp=1468346976

I just tried to rebuild 1.9.17 on i386 and that still works (in the
sense that it builds, don't know if the package actually works) - so
the problem appeared somewhere between 1.9.17 and 1.10.17.

Regards,
Christian



Possible workaround (was: Re: Help: r-cran-treescape does not build on i386, armel and armhf any more)

2016-12-14 Thread Christian Seiler
Hi again,

On 12/14/2016 03:00 PM, Christian Seiler wrote:
> If I had to guess what was going on in the backtrace, I'd suspect
> an infinite recursion in R code, which translates to infinite
> recursion of the underlying C code. But I'm really not sure here.

Interestingly enough, my initial guess was wrong.

It's not an infinite recursion, it's just a very, very deep
recursion, using a LOT of stack. If I increase the stack size
limit by to 200 MB, then the package builds successfully,
I tried that in a loop 25 times.

However, with an earlier attempt at 160 MB stack size limit,
it worked most of the time, but not always, I did get the
same error once, so the amount of stack space required does
not appear to be the same when calling the program multiple
times. (With 160 MB I tried around 15 times, and once the
160 MB limit was insufficient.)

It might even be that in rare cases the 200 MB limit is not
enough and a build could fail spuriously even with that.

> Why that only appears to occur on 32bit LE architectures with
> stable kernels (and works fine with unstable kernels on the same
> architecture, and even with the stable kernel on 64bit both LE
> and BE, as well as on 32bit BE) I also have no clue.

And this is still beyond me, because the default stack size
limit of 8 MB is more than sufficient on e.g. amd64, where
pointers are twice as large, so the amount of stack frames
that fit in that limit there is actually smaller.

So it appears you can work around this bug by manually
setting an artificially high stack size limit during the
build, but there is still an underlying problem there that
causes the stack usage to be drastically higher on
32bit LE platforms with kernel 3.16, that doesn't appear
on the same platforms with a newer kernel.

Anyway, to work around this for now, you can replace your
dh_auto_install line (that is passed to the xvfb call)
with the following command:

  /bin/sh -c "ulimit -S -s 200000 ; exec dh_auto_install"

Just tried it, sbuild built the package successfully on
i386. I haven't tried armhf, but I suspect the result will
be the same.

But the underlying problem should also be fixed: a stack
size that is 25 times higher than usual is worrisome,
especially with the standard limit being plenty sufficient
on platforms with larger pointer sizes. You might have to
ask upstream and/or the R community for advice though. (Maybe
see what R function specifically does this deep recursion,
and fix that function to be a lot shallower. I don't know
how to get that information from a gdb backtrace though, as
I don't know the internals of R.)

Hope that helps.

Regards,
Christian



Re: Help: r-cran-treescape does not build on i386, armel and armhf any more

2016-12-14 Thread Christian Seiler
Hi Andreas,

On 12/14/2016 11:47 AM, Christian Seiler wrote:
> On 12/14/2016 08:50 AM, Christian Seiler wrote:
>> I'm going to try an i386 build in a VM running a stable kernel
>> and see if that does indeed change things and if I can reproduce
>> the problem. Should that not be the issue though then I really
>> can't reproduce the problem - and hence won't be able to debug
>> it... Let's see...
> 
> Indeed: in a VM with Jessie + sbuild from jessie-backports the
> build fails with a segfault:
> 
> ** preparing package for lazy loading
> Creating a generic function for 'toJSON' from package 'jsonlite' in package 
> 'googleVis'
> Error: segfault from C stack overflow
> * removing 
> '/<>/debian/r-cran-treescape/usr/lib/R/site-library/treescape'
> dh_auto_install: R CMD INSTALL -l 
> /<>/debian/r-cran-treescape/usr/lib/R/site-library --clean . 
> --built-timestamp='Wed, 14 Dec 2016 06:45:37 +0100' returned exit code 1
> 
Now that I can reproduce this, I'll investigate more later.

Well, the stack overflow appears to be an endless loop.
I've attached a stack backtrace I obtained via gdb.

If I had to guess what was going on in the backtrace, I'd suspect
an infinite recursion in R code, which translates to infinite
recursion of the underlying C code. But I'm really not sure here.

Why that only appears to occur on 32bit LE architectures with
stable kernels (and works fine with unstable kernels on the same
architecture, and even with the stable kernel on 64bit both LE
and BE, as well as on 32bit BE) I also have no clue.

Fun fact: if you call R -d gdb, type in "run" at the gdb prompt and
then type in the following at the R prompt:

   install.packages(repos=NULL,
      lib=".../r-cran-treescape-1.10.18/debian/r-cran-treescape/usr/lib/R/site-library",
      clean=TRUE,
      pkgs=".",
      configure.args=("--built-timestamp='Wed, 14 Dec 2016 06:45:37 +0100'")
   )

instead of running the command directly as

   R CMD INSTALL \
     -l .../r-cran-treescape-1.10.18/debian/r-cran-treescape/usr/lib/R/site-library \
     --clean \
     . \
     "--built-timestamp='Wed, 14 Dec 2016 06:45:37 +0100'"

this will cause the build to go through successfully. However, running
the R CMD INSTALL directly (in a fresh source package directory)
will still trigger the error - and you can attach with gdb from
another console.

Also, if the source directory is not completely clean, then
sometimes stuff is left lying around in there, after which all
calls to R CMD INSTALL will succeed.

Unfortunately I know next to nothing about R's internals so I have
no idea what to do with it. If anyone has a pointer on how to read
the backtrace or someone with more R experience can tell me what to
look out for and how to extract useful information from that, I'd be
willing to revisit this, but otherwise I'm forced to let this go,
sorry.

Regards,
Christian
#0  bcEval (body=body@entry=0xf88c4674, rho=rho@entry=0xfe83f6d4, useCache=useCache@entry=TRUE) at eval.c:5172
#1  0xf74353c6 in Rf_eval (e=0xf88c4674, rho=0xfe83f6d4) at eval.c:616
#2  0xf7435c6f in forcePromise (e=e@entry=0xfe83f6f0) at eval.c:515
#3  0xf7436177 in FORCE_PROMISE (keepmiss=FALSE, rho=0xfe83f7ec, symbol=0xf8822b38, value=0xfe83f6f0) at eval.c:4258
#4  getvar (symbol=0xf8822b38, rho=rho@entry=0xfe83f7ec, dd=dd@entry=FALSE, keepmiss=FALSE, vcache=0xf4def33c, sidx=2) at eval.c:4300
#5  0xf742e377 in bcEval (body=body@entry=0xf88b81ec, rho=rho@entry=0xfe83f7ec, useCache=useCache@entry=TRUE) at eval.c:5425
#6  0xf74353c6 in Rf_eval (e=0xf88b81ec, rho=0xfe83f7ec) at eval.c:616
#7  0xf7437201 in Rf_applyClosure (call=<optimized out>, op=<optimized out>, arglist=<optimized out>, rho=<optimized out>, suppliedvars=<optimized out>) at eval.c:1135
#8  0xf742fdfc in bcEval (body=body@entry=0xf88c30d0, rho=rho@entry=0xfe83f6d4, useCache=useCache@entry=TRUE) at eval.c:5630
#9  0xf74353c6 in Rf_eval (e=0xf88c30d0, rho=0xfe83f6d4) at eval.c:616
#10 0xf7437201 in Rf_applyClosure (call=<optimized out>, op=<optimized out>, arglist=<optimized out>, rho=<optimized out>, suppliedvars=<optimized out>) at eval.c:1135
#11 0xf742fdfc in bcEval (body=body@entry=0xfe8802a8, rho=rho@entry=0xfe83f5f4, useCache=useCache@entry=TRUE) at eval.c:5630
#12 0xf74353c6 in Rf_eval (e=0xfe8802a8, rho=0xfe83f5f4) at eval.c:616
#13 0xf7437201 in Rf_applyClosure (call=<optimized out>, op=<optimized out>, arglist=<optimized out>, rho=<optimized out>, suppliedvars=<optimized out>) at eval.c:1135
#14 0xf743989e in R_forceAndCall (e=<optimized out>, n=1, rho=<optimized out>) at eval.c:1302
#15 0xf73a1ebc in do_lapply (call=0xf889559c, op=0xf87e4500, args=0xf8895580, rho=0xfe8405cc) at apply.c:70
#16 0xf746883a in do_internal (call=<optimized out>, op=<optimized out>, args=0xf8895580, env=<optimized out>) at names.c:1353
#17 0xf7429a69 in bcEval (body=body@entry=0xf88931d4, rho=rho@entry=0xfe8405cc, useCache=useCache@entry=TRUE) at eval.c:5678
#18 0xf74353c6 in Rf_eval (e=0xf88931d4, rho=0xfe8405cc) at eval.c:616
#19 0xf7437201 in Rf_applyClosure (call=<optimized out>, op=<optimized out>, arglist=<optimized out>, rho=<optimized out>, suppliedvars=<optimized out>) at eval.c:1135
#20 0xf742fdfc in bcEval (body=body@entry=0xfe881280, rh

Re: Help: r-cran-treescape does not build on i386, armel and armhf any more

2016-12-14 Thread Christian Seiler
Hi Andreas,

On 12/14/2016 08:50 AM, Christian Seiler wrote:
> I'm going to try an i386 build in a VM running a stable kernel
> and see if that does indeed change things and if I can reproduce
> the problem. Should that not be the issue though then I really
> can't reproduce the problem - and hence won't be able to debug
> it... Let's see...

Indeed: in a VM with Jessie + sbuild from jessie-backports the
build fails with a segfault:

** preparing package for lazy loading
Creating a generic function for 'toJSON' from package 'jsonlite' in package 
'googleVis'
Error: segfault from C stack overflow
* removing 
'/<>/debian/r-cran-treescape/usr/lib/R/site-library/treescape'
dh_auto_install: R CMD INSTALL -l 
/<>/debian/r-cran-treescape/usr/lib/R/site-library --clean . 
--built-timestamp='Wed, 14 Dec 2016 06:45:37 +0100' returned exit code 1

Now that I can reproduce this, I'll investigate more later.

Regards,
Christian



Re: Help: r-cran-treescape does not build on i386, armel and armhf any more

2016-12-13 Thread Christian Seiler
Hi Andreas,

On 12/14/2016 08:10 AM, Andreas Tille wrote:
> On Wed, Dec 14, 2016 at 12:32:24AM +0100, Christian Seiler wrote:
>> On 11/02/2016 05:20 PM, Andreas Tille wrote:
>>
>> Hmm, was going to take a shot at debugging your segfault, but I
>> simply can't reproduce this:
>> ...
>> architectures.
> 
> Unfortunately autobuilders keep on reproducing it. :-(

:-(

> I have uploaded a package where I fixed the xvfb issue and did a source
> only upload to make sure also amd64 will be autobuilt.  While amd64 is
> fine (also regarding the xserver issue - thanks to Gregor for the hints)
> the i386 build log[1] shows the
> 
> ** inst
> ** preparing package for lazy loading
> Creating a generic function for 'toJSON' from package 'jsonlite' in package 
> 'googleVis'
> Error: segfault from C stack overflow
> * removing 
> '/«PKGBUILDDIR»/debian/r-cran-treescape/usr/lib/R/site-library/treescape'
> 
> again even if the log also has gcc-6-base i386 6.2.1-6  and binutils
> i386 2.27.51.20161212-1 - so the toolchain on autobuilder is the same as
> it worked for you.

Yeah. Hmmm. :(

>  There might be a difference between a qemu emulation
> and real hardware, though.

But emulation is only for armhf; i386 is native on my machine
(amd64 can run i386 directly, and the autobuilders are also amd64
machines running i386 chroots), so my setup should be identical.

Funnily enough mipsel now also failed at the same point, which it
previously didn't.

The only other key difference I can see is that the failed builds
all run a stable kernel - and the working builds (also the build
previously working on powerpc) run a backports kernel (and I'm
running testing here). OTOH, the amd64 and arm64 builds are also
running on the stable kernel - but those are 64bit platforms.
Then OTOH in the ports section of the buildd logs you have 32bit
powerpc - and that is also on stable, but powerpc is big endian,
in contrast to i386, armhf and mipsel.

I'm really not sure what's going on there, but maybe there's a
failure case for 32bit little endian architectures when running
a 3.16 kernel? But that may be a complete red herring and
coincidence...

I'm going to try an i386 build in a VM running a stable kernel
and see if that does indeed change things and if I can reproduce
the problem. Should that not be the issue though then I really
can't reproduce the problem - and hence won't be able to debug
it... Let's see...

Regards,
Christian



Re: Help: r-cran-treescape does not build on i386, armel and armhf any more

2016-12-13 Thread Christian Seiler
On 11/02/2016 05:20 PM, Andreas Tille wrote:
> Warning in rgl.init(initValue, onlyNULL) :
>   RGL: unable to open X11 display
> Warning: 'rgl_init' failed, running with rgl.useNULL = TRUE
> Error: segfault from C stack overflow

Hmm, was going to take a shot at debugging your segfault, but I
simply can't reproduce this:

apt-get source --download-only r-cran-treescape

sbuild --arch=i386 -d unstable r-cran-treescape_1.10.18-1.dsc

[...]

Build Architecture: i386
Build-Space: 11748
Build-Time: 24
Distribution: unstable
Host Architecture: i386
Install-Time: 76
Job: .../r-cran-treescape_1.10.18-1.dsc
Machine Architecture: amd64
Package: r-cran-treescape
Package-Time: 107
Source-Version: 1.10.18-1
Space: 11748
Status: successful
Version: 1.10.18-1

sbuild --arch=armhf -d unstable r-cran-treescape_1.10.18-1.dsc

[ ... wait a long time due to qemu-user emulation ... ]

Build Architecture: armhf
Build-Space: 11748
Build-Time: 322
Distribution: unstable
Host Architecture: armhf
Install-Time: 331
Job: .../r-cran-treescape_1.10.18-1.dsc
Machine Architecture: amd64
Package: r-cran-treescape
Package-Time: 681
Source-Version: 1.10.18-1
Space: 11748
Status: successful
Version: 1.10.18-1

While my machine is amd64, sbuild does set a 32bit personality
(so uname -m returns i686) - same as the buildd that failed in
your case. The armhf chroot contains the qemu-arm-static binary
in /usr/bin for emulation purposes, but is otherwise pristine.
(But obviously using emulation is different than a buildd on
native hardware.)

The resulting packages contain binaries for the respective
architectures.

I can provide full build logs if you need them.

Maybe ask for a give-back at debian-wb-t...@lists.debian.org to
have the i386 and armhf buildds try the build again? As far as
I can tell the build should succeed...

Notable differences between buildd chroot and my freshly created
one (in the i386 case):

 buildd:    gcc 6.2.1-5, binutils 2.27.51.20161201-1
 my system: gcc 6.2.1-6, binutils 2.27.51.20161212-1

Maybe this was a toolchain bug that was fixed recently? If so,
maybe wait a couple of days (buildd chroots are updated twice
a week IIRC) and then ask for a give-back.

Regards,
Christian



Re: Distinguishing native package / package with upstream

2016-12-10 Thread Christian Seiler
On 12/10/2016 06:03 PM, Christoph Biedl wrote:
> Then I stumbled across a package that has in its .dsc file:
> 
> | Format: 1.0
> | Source: package-name
> | (...)
> | Version: 4.3.2-1
> | (...)
> | Files:
> |  0123456789abcdef0123456789abcdef 12345 package-name_4.3.2-1.tar.gz
> 
> While the version number contains a hyphen it's certainly native.
> Additionally, the upload was quite recently (in fall 2016) so it's not a
> legacy from the old rough times.
> 
> So, in order to decide native/with upstream, do I really have to take
> a look into the .dsc file? Or is the above something that should not
> happen?

I believe that this is wrong. You should either have a native package
with a single .tar.gz (no .diff.gz or .debian.tar.gz), or a non-native
package with a .orig.tar.gz together with a .diff.gz (d/source/format
"1.0") or .debian.tar.gz (d/source/format "3.0 (quilt)").

Lintian has warnings for this btw.:

https://lintian.debian.org/tags/native-package-with-dash-version.html
https://lintian.debian.org/tags/non-native-package-with-native-version.html

OTOH, some people appear to have overridden that warning, at least one
example I checked appears to be a meta-package that shadows the version
of the package it selects... And in that case there's a good reason to
also include the Debian revision in there, which is why the override is
likely valid. (In the cases where it's not overridden it's probably a
mistake though.)

So yeah, it appears that you really have to look at the .dsc to
determine whether a package is native or not.

Regards,
Christian



Re: Bug#847650: RFS: fgetty/0.7-2

2016-12-10 Thread Christian Seiler
On 12/10/2016 11:15 AM, Christian Seiler wrote:
> On 12/10/2016 10:43 AM, Gianfranco Costamagna wrote:
>> control: owner -1 !
>> control: tags -1 moreinfo
>>
>>
>>>  * Add dietlibc-dev into Built-Using, since it is linked statically,
>>>as mandated by Policy §7.8. (Closes: #847576)
>>
>>
>> I'm not sure about hardcoding the version, this will probably break 
>> binNMUs...
> 
> Well, it's mostly a policy violation. If the non-binNMU'd version was
> already in testing, the source package of dietlibc corresponding to
> the hard-coded Built-Using header would still be around in testing,
> so the binNMU could actually migrate there.

Actually, the version currently on mentors hardcodes 'diet-libc-dev',
a package which doesn't exist at all, so this won't migrate to testing
because the dependency can't be fulfilled.

dietlibc-dev would also be wrong, since 'dietlibc' is the source
package name and Built-Using requires source packages, not binary
packages.

Hmmm, I should probably write a patch for debhelper so that people
can add Built-Using-From: dietlibc-dev and dh_gencontrol automatically
replaces that by Built-Using: dietlibc (= ...). That would make life
so much easier...

Regards,
Christian



Bug#847650: RFS: fgetty/0.7-2

2016-12-10 Thread Christian Seiler
On 12/10/2016 10:43 AM, Gianfranco Costamagna wrote:
> control: owner -1 !
> control: tags -1 moreinfo
> 
> 
>>  * Add dietlibc-dev into Built-Using, since it is linked statically,
>>as mandated by Policy §7.8. (Closes: #847576)
> 
> 
> I'm not sure about hardcoding the version, this will probably break binNMUs...

Well, it's mostly a policy violation. If the non-binNMU'd version was
already in testing, the source package of dietlibc corresponding to
the hard-coded Built-Using header would still be around in testing,
so the binNMU could actually migrate there.

However, it'd be lying about the actual version of dietlibc used to
compile it, which means that'd be a policy violation, making the
binNMU rc-buggy. (Someone would have to file the RC bug manually
though.)

> what about calculating that at build time?
> https://sources.debian.net/src/virtualbox-ext-pack/5.1.10-4/debian/rules/
> this might work (see dh_gencontrol override and the control file)

As a co-maintainer of dietlibc and a maintainer of a package using it
I would recommend the following code in d/rules:

override_dh_gencontrol:
	dh_gencontrol -- -VBuilt-Using="`dpkg-query -f'$${source:Package} (= $${source:Version})' -W dietlibc-dev`"

And in d/control:

Package: ...
Built-Using: ${Built-Using}

See e.g. the tiny-initramfs package.

(Note that the Debian revision has to be included here for the
Built-Using header to follow policy, so doing cut -d- -f1 would be
wrong here.)

Regards,
Christian



Re: Help needed for Bug#847171 soapdenovo2: Different output, still FTBFS

2016-12-08 Thread Christian Seiler
Control: retitle -1 soapdenovo2: FTBFS with parallel builds (dpkg-buildpackage 
-J$n, $n > 1)

On 12/08/2016 09:17 PM, Andreas Tille wrote:
> On Thu, Dec 08, 2016 at 02:11:07PM +0500, Andrey Rahmatullin wrote:
>> On Thu, Dec 08, 2016 at 09:58:37AM +0100, Andreas Tille wrote:
>>> it seems there are different ways how the build fails but its totally
>>> unclear to me why this happens.
>> The package just built fine in my sbuild chroot for 3 times.
> 
> I tried again and had one build success in pbuilder and in the very
> same pbuilder environment the next build ended with
> 
> ...
> make[2]: Leaving directory '/build/soapdenovo2-240+dfsg1/standardPregraph'
> make[2]: Leaving directory '/build/soapdenovo2-240+dfsg1/standardPregraph'
> standardPregraph/kmerhash.o: In function `search_kmerset2':
> ./kmerhash.c:227: undefined reference to `Kmer2int128'
> standardPregraph/kmerhash.o: In function `put_kmerset2':
> ./kmerhash.c:410: undefined reference to `Kmer2int128'
> standardPregraph/kmerhash.o: In function `modular':
> ./kmerhash.c:56: undefined reference to `Kmer2int128'
> standardPregraph/kmerhash.o: In function `encap_kmerset2':
> ./kmerhash.c:354: undefined reference to `Kmer2int128'
> collect2: error: ld returned 1 exit status
> Makefile:74: recipe for target 'SOAPdenovo-127mer' failed
> make[1]: *** [SOAPdenovo-127mer] Error 1
> make[1]: *** Waiting for unfinished jobs
> standardPregraph/kmerhash.o: In function `search_kmerset2':
> ./kmerhash.c:227: undefined reference to `Kmer2int128'
> standardPregraph/kmerhash.o: In function `put_kmerset2':
> ./kmerhash.c:410: undefined reference to `Kmer2int128'
> standardPregraph/kmerhash.o: In function `modular':
> ./kmerhash.c:56: undefined reference to `Kmer2int128'
> standardPregraph/kmerhash.o: In function `encap_kmerset2':
> ./kmerhash.c:354: undefined reference to `Kmer2int128'
> collect2: error: ld returned 1 exit status
> Makefile:70: recipe for target 'SOAPdenovo-63mer' failed
> make[1]: *** [SOAPdenovo-63mer] Error 1
> make[1]: Leaving directory '/build/soapdenovo2-240+dfsg1'
> dh_auto_build: make -j4 returned exit code 2
> debian/rules:15: recipe for target 'build' failed
> make: *** [build] Error 2
> dpkg-buildpackage: error: debian/rules build gave error exit status 2
> I: copying local configuration
> E: Failed autobuilding of package
> 
> 
> I admit I'm totally clueless. :-(

Well, the problem is the following: upstream's build system doesn't
support parallel builds, see the Makefile:

all: SOAPdenovo-63mer SOAPdenovo-127mer
# [...]
SOAPdenovo-63mer:
	@cd sparsePregraph;make 63mer=1;cd ..;
	@cd standardPregraph;make 63mer=1;cd ..;
	@$(CC) sparsePregraph/*.o standardPregraph/*.o $(LDFLAGS) $(LIBPATH) $(LIBS) $(EXTRA_FLAGS) -o SOAPdenovo-63mer
SOAPdenovo-127mer:
	@cd sparsePregraph;make 127mer=1;cd ..;
	@cd standardPregraph;make 127mer=1;cd ..;
	@$(CC) sparsePregraph/*.o standardPregraph/*.o $(LDFLAGS) $(LIBPATH) $(LIBS) $(EXTRA_FLAGS) -o SOAPdenovo-127mer

It builds the project twice, and this has to be done in sequence
for this to work properly, otherwise both builds get in each other's
way.

However, if you look at the build log attached to the bug report,
you'll see that make is invoked with -j9:

   dh_auto_build
make -j9

That's the problem occurring here: you recently switched to debhelper
compat level 10, which defaults to parallel builds - see man 7
debhelper, which mentions that.

What you should do is to pass --max-parallel=1 to either dh or to
dh_auto_build in an override (either will likely work here) in
debian/rules to ensure that upstream's build system is never invoked
with parallel build options. As a more long-term goal you could try
to convince upstream to switch to a build system that supports
parallel building (i.e. write their Makefiles in a more make-like
manner and less in a shell script-like manner), because that will
save time during builds.
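
That is, something like this in debian/rules (a sketch, untested):

override_dh_auto_build:
	dh_auto_build --max-parallel=1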

Hope that helps.

Regards,
Christian



Re: usbguard soname stability

2016-12-08 Thread Christian Seiler
On 12/08/2016 02:34 PM, Muri Nicanor wrote:
> the usbguard source package ships a shared library libusbguard0. i asked
> upstream about bumping the soname when the interface changes, but
> upstream considers usbguard 0.x as not stable yet and will start
> maintaining soname versions beginning with 1.x (which is understandable).

I know that this won't necessarily convince upstream, but the SONAME
version and the project version are two very distinct things, and
they should _not_ be changed in tandem - they should be completely
independent.

A SONAME only tracks whether the library is still binary-compatible
with programs compiled against the same SONAME. (But there is only
backwards compatibility guaranteed here, not forwards compatibility.)

For example, on my system I have in /usr/lib/ a library
libzip.so.4 (part of the libzip4 package), but the package version
is only 1.2; this just means that there have been more incompatible
changes in the library than there have been major versions.
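
You can inspect a library's SONAME directly, e.g. (the path here is
from my amd64 system):

$ objdump -p /usr/lib/x86_64-linux-gnu/libzip.so.4 | grep SONAME
  SONAME               libzip.so.4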

Conversely there's libpulse.so.0 (from the package libpulse0) that
is already at version 9 - this just means that there never has been
an incompatible change in that library. (Or at least none that was
so severe that people felt like increasing the SONAME.)

So if upstream refuses to bump the SONAME even with incompatible
changes, they're doing Linux shared objects wrong. A SONAME of
.so.250 doesn't mean the software is mature (it probably means
the exact opposite, because stuff is changed so often ;-)), and
the software's actual version can still easily be 0.5 at that
point.

Now I realize that you won't necessarily be able to convince
upstream of that - unfortunately. And if you can't, I would
recommend going the route of installing the library into a private
sub-directory of /usr/lib/ and setting the RPATH, so that
it's clear that this is an internal library. (Assuming that there
are no rdeps of the library in Debian. If there are rdeps in
Debian, then I don't have any good advice, because then it's a
publicly used library and really _should_ do proper SONAME
handling of it.)
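
A rough sketch of that private-directory approach in debian/rules
(untested, assuming an autoconf-based dh build; details will vary):

include /usr/share/dpkg/architecture.mk
export DEB_LDFLAGS_MAINT_APPEND = \
	-Wl,-rpath,/usr/lib/$(DEB_HOST_MULTIARCH)/usbguard

override_dh_auto_configure:
	dh_auto_configure -- --libdir=/usr/lib/$(DEB_HOST_MULTIARCH)/usbguard

That way no other package can accidentally pick up the library at
link time.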

> and related: because upstream does not consider the project stable yet,
> i'd file an rc bug (in the full freeze time) to prevent the package from
> transitioning to stable. or is there an better/alternative way?

Define "stable". Unusable? Insecure? If so, you should keep it out
of Stretch. But just "interfaces may change quite a bit in the
future" is not a reason to keep it out of stretch if you believe
that the software is in good shape and supportable for as long as
Stretch is supported.

Basically: if you believe either you or the security team can
backport security fixes without too many difficulties during the
lifetime of Stretch, the software should go in. If you think that
this is going to be too difficult, file a RC bug to keep it out
for now (and revisit for Buster) - and if you're unsure, talk
with upstream and the Debian security team before making a
decision. Note that you can always ask the release team for
manual removal from testing later, so as long as you don't drop
the ball on this and take until the release itself to figure this
out, I wouldn't file an RC bug right now, but first try to
determine if you think it should be in Stretch or not - and then
make an informed decision.

Hope that helps.

Regards,
Christian



Re: Please help with symlink_to_dir expression (Was: Bug#847234: r-cran-rcurl: directory vs. symlink conflict: /usr/lib/R/site-library/RCurl/examples)

2016-12-06 Thread Christian Seiler
On 12/06/2016 11:45 PM, James Cowgill wrote:
> Hi,
> 
> On 06/12/16 22:34, Christian Seiler wrote:
>> On 12/06/2016 11:22 PM, James Cowgill wrote:
>>> The version number should be the version number immediately before the
>>> one where the dpkg-maintscript stuff is added, not when the symlink was
>>> converted to a directory.
>>>
>>> In this case you probably want to use "1.95-4.8-2~" (if the bug is fixed
>>> in 1.95-4.8-2).
>>
>> I wouldn't use that version if you ever want to backport that specific
>> version of the package, it's better to specify the previous Debian
>> version directly, in this case 1.95-4.8-1.
> 
> There is actually a section in dpkg-maintscript-helper(1) about why this
> is a bad idea (it breaks local builds or anyone else who manually
> patched your package).
> 
> Note that 1.95-4.8-2~ sorts before 1.95-4.8-2~deb8+1 anyway so there is
> no issue with backports here.

Yeah, you're right. Sorry about the confusion on my part.

Regards,
Christian



Re: Please help with symlink_to_dir expression (Was: Bug#847234: r-cran-rcurl: directory vs. symlink conflict: /usr/lib/R/site-library/RCurl/examples)

2016-12-06 Thread Christian Seiler
On 12/06/2016 11:22 PM, James Cowgill wrote:
> Hi,
> 
> On 06/12/16 21:36, Andreas Tille wrote:
>> On Tue, Dec 06, 2016 at 07:06:39PM +0100, Andreas Beckmann wrote:
>>> Package: r-cran-rcurl
>>> Version: 1.95-4.8-1
>>> Severity: serious
>>> User: debian...@lists.debian.org
>>> Usertags: piuparts
>>>
>>> ...
>>> From the attached log (usually somewhere in the middle...):
>>>
>>> 2m19.9s INFO: dirname part contains a symlink:
>>>   /usr/lib/R/site-library/RCurl/examples/CIS (r-cran-rcurl) != 
>>> /usr/share/doc/r-cran-rcurl/examples/CIS (?)
>>> /usr/lib/R/site-library/RCurl/examples -> 
>>> ../../../../share/doc/r-cran-rcurl/examples
>>>   /usr/lib/R/site-library/RCurl/examples/CIS/cis.R (r-cran-rcurl) != 
>>> /usr/share/doc/r-cran-rcurl/examples/CIS/cis.R (?)
>>> /usr/lib/R/site-library/RCurl/examples -> 
>>> ../../../../share/doc/r-cran-rcurl/examples
>>> ...
>>
>> I tried to fix this the following way.  In the Jessie package
>> r-cran-rcurl_1.95-4.3-1+deb8u1_amd64.deb the examples link is:
>>
>>
>> $ readlink /usr/lib/R/site-library/RCurl/examples 
>> ../../../../share/doc/r-cran-rcurl/examples
>>
>>
>> Since in the package in unstable examples is a directory I tried
>> to fix the upgrade path by
>>
>>
>> $ cat debian/maintscript
>> symlink_to_dir /usr/lib/R/site-library/RCurl/examples ../../../../share/doc/r-cran-rcurl/examples 1.95-4.3-1
> 
> The version number should be the version number immediately before the
> one where the dpkg-maintscript stuff is added, not when the symlink was
> converted to a directory.
> 
> In this case you probably want to use "1.95-4.8-2~" (if the bug is fixed
> in 1.95-4.8-2).

I wouldn't use that version if you ever want to backport that specific
version of the package, it's better to specify the previous Debian
version directly, in this case 1.95-4.8-1.

Regards,
Christian



Re: Problems with openssl when upgrading r-bioc-rtracklayer

2016-12-04 Thread Christian Seiler
Hi,

On 12/04/2016 08:55 PM, Andreas Tille wrote:
> I tried to upgrade r-bioc-rtracklayer[1] to the latest upstream version
> (see trunk in SVN) but the build failed with:
> 
> * installing *source* package 'rtracklayer' ...
> ./configure: line 1676: syntax error near unexpected token `OPENSSL,'
> ./configure: line 1676: `PKG_CHECK_MODULES(OPENSSL, openssl >= 1.0, 
> OPENSSL="yes", OPENSSL="no")'

The m4 macro PKG_CHECK_MODULES doesn't appear to be replaced
when generating configure from configure.ac. Since you are
using dh and compat = 10, dh_autoreconf is used by default,
so that's where the macro isn't properly substituted.

If you're running this in a clean environment, it likely
comes from the fact that you don't have pkg-config installed,
which installs /usr/share/aclocal/pkg.m4, where the macro is
defined (and which has to be available at the time autoreconf
is run).

So my guess is you simply need to add pkg-config to your
Build-Depends and everything should work.
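
I.e. in debian/control, something like (excerpt only; keep your
existing entries in place):

Build-Depends: debhelper (>= 10), pkg-config, [...]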

I've never used svn-buildpackage, and I don't understand how
to properly checkout what you have and build it, so I can't
really verify that this is the right solution.

Regards,
Christian



Bug#846306: RFS: ondir/0.2.3+git0.55279f03-1 [ITP]

2016-11-29 Thread Christian Seiler
Package: sponsorship-requests
Severity: wishlist
Control: block 846237 by -1

Dear mentors,

I am looking for a sponsor for my package "ondir"

 * Package name: ondir
   Version : 0.2.3+git0.55279f03-1
   Upstream Author : Alec Thomas 
 * URL : http://swapoff.org/ondir.html
 * License : GPL-2
   Section : utils

It builds those binary packages:

  ondir - Automate tasks specific to certain directories in the shell

To access further information about this package, please visit the following 
URL:

https://mentors.debian.net/package/ondir


Alternatively, one can download the package with dget using this command:

  dget -x 
https://mentors.debian.net/debian/pool/main/o/ondir/ondir_0.2.3+git0.55279f03-1.dsc

The package is also available via git in the debian/master branch of:

https://anonscm.debian.org/git/collab-maint/ondir.git

Regards,
Christian



Re: Writing outside of build dir

2016-11-26 Thread Christian Seiler
On 11/26/2016 02:31 PM, Ross Vandegrift wrote:
> On Sat, Nov 26, 2016 at 02:30:59AM +0100, Christian Seiler wrote:
>>> 2) Is there a common pattern for handling upstream tests that break this
>>> rule?  Maybe there's an alternative to disabling them?
>>
>> If upstream tests do that, I would suggest sending a patch
>> upstream that fixes them, because especially for tests I
>> would consider this a bug.
>>
>> That said, if tests just require stuff in the home directory 
>> you could set the HOME environment variable to a temporary
>> directory within the build tree before you run the tests, to
>> work around this kind of problem. Nevertheless I would consider
>> those tests buggy and would want to patch them.
>>
>> If you could give a couple of examples of what exactly you're
>> thinking of, maybe my answer could be more specific.
> 
> A library service creates local sockets.  The library provides a
> fallback mechanism for the socket location - first try $XDG_RUNTIME_DIR,
> second try $HOME, finally use $TMPDIR.  Most of the tests unset the
> first two and go straight to TMPDIR.  But to test the fallback mechanism
> itself, two tests do not.
> 
> As a workaround, I disabled these.  But it was suggested to instead set
> HOME=/tmp, XDG_RUNTIME_DIR=/tmp.  Seems clever, but I wasn't sure if
> this was permitted.

Well, you could also do the following before running the tests (as a
bash script; how you integrate that is up to you):

cleanup() {
  [ -n "$temporary_HOME" ] && rm -r "$temporary_HOME"
  [ -n "$temporary_XDG_RUNTIME_DIR" ] && rm -r "$temporary_XDG_RUNTIME_DIR"
}

trap cleanup EXIT
temporary_HOME="$(mktemp -d)"
temporary_XDG_RUNTIME_DIR="$(mktemp -d)"

HOME="$temporary_HOME" XDG_RUNTIME_DIR="$temporary_XDG_RUNTIME_DIR" ./run_tests

That way you'd not be using /tmp directly (bad idea to pollute that
directly), and you'd have two different directories, to be sure that
the fallback actually works.

Also, if setting HOME is not enough (because the software reads the
home directory directly from the NSS database, e.g. /etc/passwd), then
you could use nss_wrapper for that, see https://cwrap.org/nss_wrapper.html
That was specifically designed for tests to provide a different
environment. In general CWrap is very nice for tests that integrate into
the system a bit deeper: https://cwrap.org/

Finally, some tests you may not want to execute during build time.
There are also runtime tests in Debian, called autopkgtests, and there
is automated infrastructure in place to run them regularly. Debian's
infrastructure uses LXC to isolate these tests, so in those tests you
can in fact write anywhere you want if that really is required (as
long as you declare things such as the proper isolation level and
possibly breaks-testbed). See also:

https://ci.debian.net/doc/

I would in fact recommend using some kind of autopkgtest in general,
even if you can run the unit test suite during build time - since the
autopkgtests are more related to integration testing instead of pure
functionality. (You would run different tests, obviously.)

For example, a web server package could contain unit tests that would
start the web server on localhost on a random port during build time
to see if it responds correctly to requests, whereas an autopkgtest
for the same package would test for example whether the webserver is
started properly after package installation and listens on the correct
port. The autopkgtest would have the proper isolation level specified
(isolation-container in this case) to make sure that this does not
interfere with the system the test is run on.
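
A minimal debian/tests/control sketch for such an integration test
(the test name here is made up):

Tests: check-listening-port
Depends: @
Restrictions: isolation-container

The corresponding script debian/tests/check-listening-port would then
probe the port after the package has been installed; "Depends: @"
pulls in the packages built from the source package.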

Regards,
Christian



Re: Writing outside of build dir

2016-11-25 Thread Christian Seiler
On 11/26/2016 01:59 AM, Ross Vandegrift wrote:
> On 11/11/2016 08:26:45 AM, Christian Seiler wrote:
>> pbuilder sets the home directory of the pbuilder user to /nonexistent
>> to make sure that builds don't modify files in the home directory,
>> which is forbidden by Debian Policy (for good reason builds are not
>> supposed to change things outside the build directory).
> 
> Could you point me to this policy?  I'd like to learn more, but haven't
> been able to find it.

I just checked and it really isn't in there. OTOH, there have been
bug reports with severity serious about this issue since forever;
a quick search randomly gave me the following reports within 1
minute, and then I stopped looking:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=415367
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=469903

My guess is that it probably never got added to policy because
autobuilders fail with similar errors as pbuilder does, so
packages that violate this FTBFS, which is an automatic RC bug,
regardless of policy. And probably also because most people
find this principle so obvious.

Btw. notable exception is /tmp (or $TMPDIR, if that's set), as
long as there is cleanup afterwards. Many compilers tend to use
/tmp for temporary files. (gcc does by default unless you
specify -pipe.)

> Two probably naive questions:
> 
> 1) Why? (I can imagine reasons, but don't want to assume that I know)

Well, because of side effects. Suppose you want to fix a bug in
a package you're using, and there is some patch already available
online, and you just want to rebuild the package with that patch
included, but you don't really understand all details of the
entire build system etc. of the package (even though you might
understand the patch itself) - what if the package then just
modifies configuration in your home directory? Or installs stuff
there? What if your home directory has limited space available
and you're building on a different partition - but then suddenly
the partition with your home directory is full just because of a
package build that has side effects.

There are tons of reasons why builds should be self-contained,
and I think this is something that should IMHO be very obvious.

Also, this is a good practice irrespective of Debian, upstream
packages should do the same in principle. (And most do.) Now in
the case of build systems that have a feature to automatically
download dependencies this is a bit more complicated (because
the user might want that feature), so I understand why upstream
might deviate from that principle in this specific case. [1]
But in general at least I see no reason for any build system to
not be self-contained within the source directory of the package.
(Technically, if the system supports out of tree builds, as
many traditional systems such as automake or cmake do, it should
even be contained in the build directory only and not modify the
source directory at all.)

> 2) Is there a common pattern for handling upstream tests that break this
> rule?  Maybe there's an alternative to disabling them?

If upstream tests do that, I would suggest sending a patch
upstream that fixes them, because especially for tests I
would consider this a bug.

That said, if tests just require stuff in the home directory 
you could set the HOME environment variable to a temporary
directory within the build tree before you run the tests, to
work around this kind of problem. Nevertheless I would consider
those tests buggy and would want to patch them.

If you could give a couple of examples of what exactly you're
thinking of, maybe my answer could be more specific.

Regards,
Christian

[1] But even there I dislike this. I don't think running a build
should install stuff. I could get behind a build system having a
separate command line command for downloading the dependencies
automatically that the user could explicitly call if required,
and maybe a combined command for doing both at the same time,
but I do think that a command to "just build, and fail if deps
not available" should be easily available in any sane build
system, for various reasons. (The famous "dissident test", but
even more trivial things such as being behind a metered
connection.)



Bug#835274: dh-text no longer needed

2016-11-17 Thread Christian Seiler
Control: tags -1 - moreinfo

On 11/18/2016 08:34 AM, Dmitry Bogatov wrote:
> 
> [2016-11-16 13:09] Christian Seiler <christ...@iwakd.de>
>> Am 16. November 2016 10:28:41 MEZ, schrieb Dmitry Bogatov <kact...@gnu.org>:
>>>  * Drop diet libc build due issues with errno
>>
>> As a current co-maintainer of dietlibc in Debian, could you elaborate
>> here? I've spent the last couple of months fixing all sorts of bugs in
>> there (and impro ving packaging, for example dietlibc-dev is now M-A:
>> same), and if problems remain, I'd also like to see them fixed.
>>
>> Of course, you are free to use/drop usage of dietlibc for whatever
>> reason, and maybe there are others than just a specific bug. But
>> irrespective of whether y ou keep building against it, I'd like to fix
>> potential bugs.
> 
> Hm, seems things changed since last time I tried to build with diet. But
> still, it does not work:
> 
>   ./load bcrontab bcron.a -lbg-cli -lbg
>   ./load bcron-start bcron.a -lbg
>   /usr/lib/bglibs/libbg.a(connectu.o): In function `socket_connectu':
>   (.text+0x2f): undefined reference to `__strcpy_chk'
>   /usr/lib/bglibs/libbg.a(mktemp.o): In function `path_mktemp':
>   (.text+0xd7): undefined reference to `__lxstat'
>   collect2: error: ld returned 1 exit status
>   Makefile:66: recipe for target 'bcrontab' failed

Ah, you're trying to link code compiled against glibc headers (in
this case bglibs) with dietlibc, and that won't work. If you want
to use dietlibc in conjunction with additional libraries, all
additional libraries need to be compiled for dietlibc as well.

You can easily do that additionally, by compiling the libraries
twice: installing the libraries in /usr/lib/ directly
when compiling against glibc, and installing them in
$(diet -L ${CC:-gcc}) [1] when compiling against dietlibc.
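
Very roughly, a sketch only (the exact targets and variables depend
on bglibs' build system, so don't take this literally):

# regular glibc build, as today
make && make install

# additional dietlibc build
make clean
make CC="diet gcc" && make install libdir="$(diet -L gcc)"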

Looking at the changelog of bglibs, they haven't compiled against
dietlibc since 2008, so unless you change bglibs, this can't
work.

I'm hence removing the moreinfo tag, because your course of action
to remove dietlibc-dev from Build-Depends of your package is the
right one. If at some point in the future bglibs are recompiled
against dietlibc (in addition to glibc preferably) it might make
sense to re-add the dietlibc-dev Build-Depends here, but for now
you can't do anything about that.

I would only ask you to alter the changelog to reflect the actual
reason why you don't use dietlibc anymore; that is use an entry
like:

 * Drop dietlibc-dev from Build-Depends for now. (Libraries
   depended upon are not compiled for dietlibc anymore.)

That makes it clear what is really going on.

Thanks!

Regards,
Christian

[1] This is Multi-Arch-aware in the current Debian package for
dietlibc. For example, on x86_64 it will expand to
/usr/lib/x86_64-linux-gnu/diet/lib-x86_64
Headers need to go into /usr/include/diet, and need to be
the same across all architectures if the package containing
them is to be M-A: same. I'll probably need to change that
part in the future.



Bug#835274: dh-text no longer needed

2016-11-16 Thread Christian Seiler
Hi there,

Am 16. November 2016 10:28:41 MEZ, schrieb Dmitry Bogatov :
>  * Drop diet libc build due issues with errno

As a current co-maintainer of dietlibc in Debian, could you elaborate here? 
I've spent the last couple of months fixing all sorts of bugs in there (and 
improving packaging, for example dietlibc-dev is now M-A: same), and if 
problems remain, I'd also like to see them fixed.

Of course, you are free to use/drop usage of dietlibc for whatever reason, and 
maybe there are others than just a specific bug. But irrespective of whether 
you keep building against it, I'd like to fix potential bugs.

Thanks,
Christian



Re: Scala 2.10

2016-11-10 Thread Christian Seiler
On 11/11/2016 03:08 AM, Marko Dimjašević wrote:
> # Adding debian-mentors
>> /build/scala-2.10.5/build.xml:218: Directory /nonexistent/.m2/repository
>> creation was not successful for an unknown reason

pbuilder sets the home directory of the pbuilder user to /nonexistent
to make sure that builds don't modify files in the home directory,
which is forbidden by Debian Policy (for good reason builds are not
supposed to change things outside the build directory). In your case
the build process does try to create the directory $HOME/.m2/repository,
and that fails because of this. So in this case pbuilder caught the
problem very early.

I'm not in the Java packaging community, but from a little searching
.m2 appears to be created by the Maven build system, and I know for
sure there is software packaged in Debian that uses Maven, so maybe
you'd want to take a look at those packages how they do their build.

Hope that helps.

Regards,
Christian



Re: Building package under kfreebsd/hurd

2016-11-07 Thread Christian Seiler
On 11/07/2016 12:46 AM, Elías Alejandro wrote:
> I wonder if there's a way to build packages for distinct
> architectures, specifically for
> Hurd or Kfreebsd. Do I have to create a new installation or use qemu?.

In my experience the easiest way to do so is to use a virtual
machine (I prefer libvirt + virt-manager with Qemu for that),
boot the machine, install an SSH server and then work on it via SSH.

In the case of Hurd, you really don't want to use that on your
bare-metal hardware, because last time I checked it didn't
support USB yet. (If you don't need USB you can of course use
it. ;-)) kFreeBSD is not a problem in that regard, but unless
your system is really RAM-starved a VM is still much easier to
handle.

Note that it's not completely trivial to set up these machines.
The problem is that most installation media you can find are
a bit older, and if you've ever tried to install testing/sid
with an older installer, you can see that it often doesn't
quite work because sid will have moved on quite a bit. Plus
a lot of the documentation you find is a bit outdated for
both archs - there is more current documentation, but when
searching you more often than not find the outdated docs in
my experience, before you find the current ones.

In the case of Hurd Samuel Thibault provides premade images
you can use:
https://people.debian.org/~sthibault/hurd-i386/README
https://people.debian.org/~sthibault/hurd-i386/
I suspect that's going to be the easiest way of setting up a
VM there. (Please do a dist-upgrade before you actually use
them to try stuff though, they are relatively up to date, but
aren't daily images.)

In the case of kFreeBSD, I'm not completely sure anymore,
but if I remember correctly, I used the Jessie rc3 installer
to install the VM and then dist-upgraded to sid (by changing
the sources.list):
http://cdimage.debian.org/mirror/cdimage/archive/jessie_di_rc3/kfreebsd-amd64/iso-cd/
(That may or may not work, depending on whether I remember
correctly.)

In both cases (Hurd, kFreeBSD) please be aware that while a
lot of the everyday userland is still the same as with the
Linux ports (e.g. ls, cp, etc.), many administrative commands
are quite different or at least have different options / a
different output. Especially Hurd can be quite weird when you
first come in contact with it; once you get to know some of
the concepts and ideas behind it, it's actually really cool,
but there's a bit of a learning curve there.

Hope that helps.

> [1]https://wiki.debian.org/qemubuilder

I haven't tried that yet, but from reading the wiki page it
looks to me that it's mostly a Linux thing - and while there
is no inherent reason why fully-fledged VMs with Hurd or
kFreeBSD wouldn't work in principle with something like that,
I suspect that you'd need to fix a lot of things to make it
work. (I may be wrong though.) It's probably easier to just
use a virtual machine manually yourself.

Regards,
Christian



Re: Data updates in debian packages

2016-10-31 Thread Christian Seiler
On 10/31/2016 10:30 AM, Ole Streicher wrote:
> ---8<--- debian/triggers ---8<---
> interest /usr/share/zoneinfo/leap-seconds.list
> ---8<---
> 
> However, I now get the following error when I try to update tzdata:
> 
> dpkg: cycle found while processing triggers:
>  chain of packages whose triggers are or may be responsible:
>   casacore-data-tai-utc -> casacore-data-tai-utc
>  packages' pending triggers which are or may be unresolvable:
>   casacore-data-tai-utc: /usr/share/zoneinfo/leap-seconds.list
> dpkg: error processing package casacore-data-tai-utc (--configure):
>  triggers looping, abandoned
> Errors were encountered while processing:
>  casacore-data-tai-utc
> 
> What is my mistake here?

Well, if your package Depends: on tzdata, then you created a cycle:
tzdata wants to trigger your package, but your package depends on
tzdata.

What you'll want to do is

interest-noawait ...

instead of

interest ...

A detailed explanation is man 5 deb-triggers together with
/usr/share/doc/dpkg-dev/triggers.txt.gz, but it's not easy to grok.
However, the recommendation in man 5 deb-triggers is something you
should follow, i.e. use -noawait triggers unless you really need
-await triggers for some reason.
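
So the complete debian/triggers file would simply read:

interest-noawait /usr/share/zoneinfo/leap-seconds.list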

Regards,
Christian



Re: Data updates in debian packages

2016-10-31 Thread Christian Seiler
On 10/31/2016 09:07 AM, Ole Streicher wrote:
> Russ Allbery  writes:
>> The required timeliness depends a lot on what you're using leap seconds
>> for, and in particular if you need to know about them far in advance, or
>> if it's only necessary to have an updated table before the leap second
>> itself arrives.
> 
> We need it to put correct time on astronomical registrations, so it is
> most important to have them once they are effective. Having them in
> advance would be an additional plus, however, since f.e. a computer may
> be disconnected during/after the observation, if that happens on a place
> without internet connection.

Data might help here, so I've looked at the past 3 leap seconds that
were introduced (I don't think it makes sense to go further back,
because the one before that was 2009, and that's probably too long
ago to draw conclusions):

Leap second | Jun 2012         | Jun 2015         | Dec 2016
------------+------------------+------------------+-----------------
IERS ann.   |       2012-01-05 |       2015-01-05 |       2016-07-06
tzdata rel. | 2012a 2012-03-01 | 2015a 2015-01-29 | 2016g 2016-09-13
sid         | 2012b 2012-03-06 | 2015a 2015-01-31 | 2016g 2016-09-28
stable      | 2012c 2012-05-05 | 2015a 2015-02-01 | 2016g 2016-10-03
stable PR   |       2012-05-12 |       2015-09-05 |          not yet
            |                  | (now oldstable)  |
oldstable   | (Lenny EOL)      | 2015c 2015-04-17 | 2016h 2016-10-26

"stable" means stable/updates (former volatile), "stable PR" means
the stable point release that gathered up the all stable/updates,
stable-security and stable/proposed-updates and "oldstable" means
squeeze-lts and wheezy-security. (In both cases they were already LTS,
no leap second in the last 6 years has fallen into a window where we
had oldstable not being LTS.)

Note that the "stable PR" metric just shows you that you don't want
to run a system that needs up to date leap seconds data without
having stable/updates enabled, just because point releases are too
infrequent. (But that would apply to a new package tracking just
the leap seconds data from IERS as well.)

What this does say is that stable/updates and oldstable (LTS) had
updated leap seconds information slightly less than 3 months before
the leap second, in some cases even a bit earlier. If we are going
to assume that in a perfect storm this might be a bit worse, then I
think one can say that roughly 2 months in advance of a leap second
any officially supported Debian version will have updated an tzdata
package. (If you enable the proper repositories.)

(Btw. leap-seconds.list was only introduced upstream in 2013, and
packaged in the binary package in 2015; before that only the binary
rules files for each time zone contained the leap second info.
However, since this is used by
DSA, this is going to be kept around.)

Hope this information helps in you evaluating this.

Regards,
Christian



Re: Non-NEW backports rejected with "ACL dm: NEW uploads are not allowed"

2016-10-30 Thread Christian Seiler
On 10/30/2016 11:11 AM, Mattia Rizzolo wrote:
> On Sun, Oct 30, 2016 at 12:04:16AM -0400, Peter Colberg wrote:
>> Older versions of the packages already exist in jessie-backports. My
>> key has been added to the backports ACL (and has worked for similar
>> updates in the past), and I have DM upload rights for the packages.
>>
>> My uploads for both packages were rejected with "ACL dm: NEW uploads
>> are not allowed". I tried two times each, on October 16 and today, to
>> rule out temporary errors. Do you have any idea what I am missing?
>>
>> I attached the (unsigned) .changes files for the attempted uploads.
> 
> from the first .changes:
> 
> | Distribution: jessie
> 
> that's wrong, you are trying to upload to the stable distribution,
> instead of jessie-backports.

I would recommend to use dput from dput-ng instead of plain old
dput, because that will catch the discrepancy between the distribution
in the changelog and the changes file. It prints a nice error message
explaining the problem, before the package is even uploaded.
(I accidentally stumbled over that yesterday, due to sbuild -d sid
but changelog having unstable in it.)

Regards,
Christian



Re: Data updates in debian packages

2016-10-30 Thread Christian Seiler
On 10/30/2016 10:20 AM, Ole Streicher wrote:
> IETF is responsible for internet standards, not for leap seconds. They
> will take the leap seconds from IERS. I would assume that this
> connection is well-established to rely on it. I was not so much
> questioning upstream here, but I worry a bit about the Debian package
> for tzdata: how sure can I be that the tzdata is actual (wrt upstream)?

Regular stable updates (via stable/updates, not only point releases)
happen for that package, in addition to regular uploads to unstable.
See the timeline in:
https://tracker.debian.org/pkg/tzdata

From what I can tell, this is probably the package that's updated in
stable most consistently in the entirety of Debian. I would really
recommend that you rely on tzdata directly, this will also save the
release team a lot of work. (It's much easier for them to approve
just a single package than 100 packages that need the time zone
and/or leap second information.)

Regards,
Christian



Re: Finding the correct alignment for all architectures

2016-10-12 Thread Christian Seiler
Hi,

On 10/12/2016 11:51 PM, Thomas Weber wrote:
> I am maintaining lcms2. In #749975, I received a patch to ensure correct
> alignment for doubles von MIPS. I have forwarded the patch upstream[1], but
> in the latest release, upstream has chosen a different way. It is now
> possible to configure the alignment via a preprocessor variable
> CMS_PTR_ALIGNMENT[2]:
> // Alignment to memory pointer
> 
> // (Ultra)SPARC with gcc requires ptr alignment of 8 bytes
> // even though sizeof(void *) is only four: for greatest flexibility
> // allow the build to specify ptr alignment.
> #ifndef CMS_PTR_ALIGNMENT
> # define CMS_PTR_ALIGNMENT sizeof(void *)
> #endif
> 
> #define _cmsALIGNMEM(x)  (((x)+(CMS_PTR_ALIGNMENT - 1)) & ~(CMS_PTR_ALIGNMENT - 1))
> 
> I would like to drop the Debian-specific patch. But what value for
> CMS_PTR_ALIGNMENT would be good/sufficient on all arches?

Use _Alignof(type), that will always be correct. :-)

For example:

#define POINTER_ALIGNMENT _Alignof(void *)
#define DOUBLE_ALIGNMENT  _Alignof(double)

Technically, this was introduced in C11/C++11, so if you
need to support really old compilers, this may be problematic,
but gcc/clang have supported that for a while. (A quick test
tells me that gcc and clang from Jessie already support it.)
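
If you want to double-check the values on a given platform, a tiny
C11 test program will print them:

#include <stdio.h>

int main(void)
{
    /* _Alignof yields a size_t constant expression */
    printf("alignof(void *) = %zu\n", _Alignof(void *));
    printf("alignof(double) = %zu\n", _Alignof(double));
    return 0;
}

On amd64 both print 8; on 32-bit MIPS a double needs 8-byte alignment
while a pointer only needs 4, which is exactly the situation #749975
is about.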

Regards,
Christian



Re: NFS_SUPER_MAGIC portability

2016-09-25 Thread Christian Seiler
On 09/25/2016 03:12 PM, Ole Streicher wrote:
> I have the problem that in a package (casacore) there is basically the
> following code:
> 
> ---8<---
> #include <sys/vfs.h>
> #include <linux/nfs_fs.h>
> 
> Bool Directory::isNFSMounted() const
> {
>    struct statfs buf;
>    if (statfs (itsFile.path().expandedName().chars(), &buf) < 0) {
>       throw (AipsError ("Directory::isNFSMounted error on " +
>                         itsFile.path().expandedName() +
>                         ": " + strerror(errno)));
>    }
>    return buf.f_type == NFS_SUPER_MAGIC;
> }
> ---8<---
> 
> The linux include subdir is obviously only available on Linux archs, not
> on kfreebsd or hurd. From the "statfs" manpage, I had the impression
> that the second include is just not needed; however then NFS_SUPER_MAGIC
> is not available.
> 
> So how do I do this portable (so that I could forward it to upstream as
> well)?

There's no easy way to make that portable. NFS_SUPER_MAGIC is Linux-
specific. statfs() is actually non-portable, and on e.g. FreeBSD
kernels the structure is slightly different (and you need to include
sys/param.h and sys/mount.h instead). Also, at least in a FreeBSD
10.3 VM of mine, the f_type field on an NFS mount is 0x3a - but I
have no idea whether that's guaranteed to be stable or is just some
number assigned dynamically. OTOH, struct statfs on FreeBSD has
f_fstypename, which is "nfs" for NFS mounts. Also, f_flags on
FreeBSD has MNT_LOCAL if it's a local mount, so you might want to
check that instead. On Hurd I have no idea, I've never tried NFS
there (but support exists).

So the only thing you can actually do realistically is to use lots
of #ifdefs, because file system detection is inherently unportable.

Basically, you'd need to do something like (untested, written down
in this email client):

----------------------------------------
static int is_on_nfs (const char *file);

#ifdef __linux__

#include <sys/vfs.h>
#include <linux/magic.h>

int is_on_nfs (const char *file)
{
  struct statfs buf;
  if (statfs (file, &buf) < 0)
    return -1;
  return buf.f_type == NFS_SUPER_MAGIC;
}

#elif defined(__FreeBSD_kernel__)

#include <sys/param.h>
#include <sys/mount.h>
#include <string.h>

int is_on_nfs (const char *file)
{
  struct statfs buf;
  if (statfs (file, &buf) < 0)
    return -1;
  return strcmp (buf.f_fstypename, "nfs") == 0;
}

#elif defined(__hurd__)

/* something else on Hurd */

#else

/* no idea how to detect NFS on this OS */

int is_on_nfs (const char *file)
{
  (void) file;
  return 0;
}

#endif

Bool Directory::isNFSMounted() const
{
   int result = is_on_nfs(itsFile.path().expandedName().chars());
   if (result < 0) {
      throw (AipsError ("Directory::isNFSMounted error on " +
                        itsFile.path().expandedName() +
                        ": " + strerror(errno)));
   }
   return result != 0;
}
----------------------------------------

The much more important question is: why do need this detection? Because
typically you want to detect NFS for one of the following reasons:

 - network filesystem -> slow -> don't do too much I/O there

 - lacks specific guarantees / features

In either case, NFS is not the only contender for this though. On Linux
there are lots of other possibilities here: there are other network
filesystems in the kernel, there are local filesystems that don't follow
POSIX guarantees (think e.g. vfat), there are a ton of FUSE filesystems
out there that support a varying number of features required by POSIX.
Plus, FUSE filesystems exist for both network (e.g. sshfs) or local
(think ntfs-3g). On Hurd, you can have arbitrary translators provide
the backing of a specific directory, and different translators support
a different degree of POSIX features.

Therefore, it might be a good idea to know _why_ you want to check for
NFS here? What's the use case? Perhaps there's a better and more
portable way to check for that specific thing.

Regards,
Christian



Re: atomic_LIBS

2016-09-10 Thread Christian Seiler
On 09/10/2016 03:22 PM, Muri Nicanor wrote:
> On 09/06/2016 12:44 PM, Christian Seiler wrote:
> [...]
>> I didn't think about adding -latomic to the linker flag list
>> directly via -Wl. I just tested your suggestion and it's really
>> funny; libtool does mangle your line and separate it into:
>>
>>  -Wl,--push-state -Wl,--as-needed -Wl,-latomic -Wl,--pop-state
>>
>> but since there's no direct -l argument, it actually does work
>> and the things are kept together and in order.
>>
>> @Muri: use this line in the patch instead:
>> AC_CHECK_LIB([atomic], [__atomic_add_fetch_8], 
>> [atomic_LIBS="-Wl,--push-state,--as-needed,-latomic,--pop-state"], 
>> [atomic_LIBS=""]) 
>>
>> That way, the libatomic dependency will only be picked up on
>> platforms where it's necessary.
> 
> i've created a pull request for that change upstream[0], but the ci
> seems not to like the patch:
> https://travis-ci.org/dkopecek/usbguard/builds/158517934 - i'm not sure
> what to make of that, i don't really see a difference in the successfull
> builds and the ones that failed.

Ah, they are apparently using a version of binutils that doesn't
support --push-state. In that case, you should use the following
in configure.ac:

AC_CHECK_LIB([atomic], [__atomic_add_fetch_8], [
  __saved_LIBS="$LIBS"
  LIBS="$LIBS -Wl,--push-state,--as-needed,-latomic,--pop-state"
  AC_LINK_IFELSE([AC_LANG_PROGRAM()],
[atomic_LIBS="-Wl,--push-state,--as-needed,-latomic,--pop-state"],
[atomic_LIBS="-latomic"]
  )
  LIBS="$__saved_LIBS"
], [atomic_LIBS=""])
AC_SUBST([atomic_LIBS])

That will check if the linker flags for --as-needed work, and if
they don't, -latomic is just added unconditionally. This is not
ideal on platforms where libatomic is available, but not required
and ld doesn't support --push-state (because then a spurious
dependency on libatomic will be added to the compiled program),
but at the very least will that always produce a working binary.

Regards,
Christian



Re: FTBFS: how to test fixes

2016-09-06 Thread Christian Seiler
On 09/06/2016 11:57 AM, Jakub Wilk wrote:
> * Christian Seiler <christ...@iwakd.de>, 2016-09-05, 20:33:
>> Also note that there are plans to make init non-Essential in the future,
> 
> The future is now! init is non-essential already. You can remove it
> from your unstable chroot if you want to.

Oh cool, didn't know that was already done. Then this just
means that the buildd chroots for most archs (except apparently
hppa) were only upgraded but never rebuilt from scratch - so
the fact that the package builds at all at the moment is an
artifact of how buildd chroots are maintained. ;-)

>> MIPS (at least 32bit) doesn't support 64bit atomic operations
>> intrinsically (_8 == 8 bytes) - and your software uses
>> std::atomic (found that by grepping).
>>
>> However, gcc provides an emulation library called libatomic. You
>> should link against that emulation library if present in order to
>> use those intrinsics.
> 
> You shouldn't need to care about this. This should be the compiler's
> job.

You're right, I agree. I'll file a bug against gcc later.

>> This might result in a spurious dependency on libatomic on other
>> platforms, but unfortunately I don't know of any way to properly
>> pass --as-needed for just this library without libtool reordering
>> the entire list of linker flags. :-(
> 
> Not tested against libtool, but this should do the trick:
> 
> -Wl,--push-state,--as-needed,-latomic,--pop-state
> 
> (Since this is just one g++ argument, libtool doesn't have room to
> reorder much.)

Hrmpf, my try yesterday was

  -Wl,--push-state,--as-needed -latomic -Wl,--pop-state

I didn't think about adding -latomic to the linker flag list
directly via -Wl. I just tested your suggestion and it's really
funny; libtool does mangle your line and separate it into:

 -Wl,--push-state -Wl,--as-needed -Wl,-latomic -Wl,--pop-state

but since there's no direct -l argument, it actually does work
and the things are kept together and in order.

@Muri: use this line in the patch instead:
AC_CHECK_LIB([atomic], [__atomic_add_fetch_8], [atomic_LIBS="-Wl,--push-state,--as-needed,-latomic,--pop-state"], [atomic_LIBS=""])

That way, the libatomic dependency will only be picked up on
platforms where it's necessary.

Regards,
Christian



Re: FTBFS: how to test fixes

2016-09-05 Thread Christian Seiler
On 09/05/2016 08:59 PM, Muri Nicanor wrote:
> On 09/05/2016 08:33 PM, Christian Seiler wrote:
>>Since you depend on systemd.pc, which is part of the
>>systemd package, just Build-Depend on systemd to make
>>systemd.pc available. You won't need porterbox access
>>to fix that issue. (Btw. libsystemd.pc != systemd.pc)
> 
> ah, that comment in paranthesis helped me to understand the problem ;) i
> was looking at the wrong package and was wondering what to do, because
> there is no official libsystemd-dev package for hppa. thanks for
> pointing that out! ;)

Huh? There is libsystemd-dev on hppa, it's just out of date
at the moment:
https://packages.debian.org/unstable/libsystemd-dev
(231-3 instead of 231-5)

Note that hppa is a non-official port, so e.g. rmadison won't
show it.

See:
https://www.ports.debian.org/

+ the systemd directory in the hppa ports pool:
http://ftp.ports.debian.org/debian-ports/pool-hppa/main/s/systemd/

Regards,
Christian



Re: FTBFS: how to test fixes

2016-09-05 Thread Christian Seiler
On 09/05/2016 07:20 PM, Andrey Rahmatullin wrote:
> On Mon, Sep 05, 2016 at 07:07:51PM +0200, Muri Nicanor wrote:
>> so, i've got my first two FTBFS bugs (on mips and hppa)- what the
>> recommended way of testing fixes for architectures i don't have
>> testmachines of?
> Porterboxes. See https://dsa.debian.org/doc/guest-account/ about getting
> access for non-DDs.

Note that there are no official hppa porterboxes. You can ask on
the debian-hppa mailing list for access to an unofficial one
though.

But speaking of the bugs, they don't actually require porterbox
access.

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=836713

   The hppa build chroots don't have systemd installed (for
   whatever reason), in contrast to chroots on most other
   architectures.

   Since you depend on systemd.pc, which is part of the
   systemd package, just Build-Depend on systemd to make
   systemd.pc available. You won't need porterbox access
   to fix that issue. (Btw. libsystemd.pc != systemd.pc)

   Also note that there are plans to make init non-Essential
   in the future, so more build chroots will not have
   systemd preinstalled in them, so the problem you're seeing
   on hppa now is going to be a problem on all archs sooner
   or later.

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=836712

   MIPS (at least 32bit) doesn't support 64bit atomic
   operations intrinsically (_8 == 8 bytes) - and your software
   uses std::atomic (found that by grepping).

   However, gcc provides an emulation library called libatomic.
   You should link against that emulation library if present
   in order to use those intrinsics.

   I've attached a patch against your package (add it as a quilt
   patch) that checks for the availability of libatomic and adds
   it to the linker flags. This might result in a spurious
   dependency on libatomic on other platforms, but unfortunately
   I don't know of any way to properly pass --as-needed for just
   this library without libtool reordering the entire list of
   linker flags. :-(

   I've build-tested (including test suite) on amd64 and mipsel
   (qemu-user though) and the patch fixes the error.

Regards,
Christian
--- a/Makefile.am
+++ b/Makefile.am
@@ -134,7 +134,8 @@ libusbguard_la_LIBADD=\
 	@json_LIBS@ \
 	@udev_LIBS@ \
 	@crypto_LIBS@ \
-	@pegtl_LIBS@
+	@pegtl_LIBS@ \
+	@atomic_LIBS@
 
 libusbguard_la_SOURCES=\
 	src/Common/Thread.hpp \
--- a/configure.ac
+++ b/configure.ac
@@ -71,6 +71,13 @@ AM_PROG_LIBTOOL
 AC_PROG_LIBTOOL
 
 #
+# Check if libatomic is available, might be required for emulating
+# atomic intrinsics on some platforms.
+#
+AC_CHECK_LIB([atomic], [__atomic_add_fetch_8], [atomic_LIBS="-latomic"], [atomic_LIBS=""])
+AC_SUBST([atomic_LIBS])
+
+#
 # Checks for required libraries.
 #
 PKG_CHECK_MODULES([udev], [libudev >= 200],


Re: d/control: Depends on same version

2016-09-04 Thread Christian Seiler
On 09/04/2016 09:40 PM, Muri Nicanor wrote:
> if i have source package foo-x.y that builds binary packages foo_x.y and
> libfoo_x.y, how can i declare a dependency from foo on libfoo where
> libfoo has to be the same version of foo?

If both are Arch: any (or linux-any or something similar):

Depends: libfoo (= ${binary:Version})

However, if one of the packages is Arch: all, and if you want to be
binNMU-friendly, you should probably rather use something like

Depends: libfoo (>= ${binary:Version}), libfoo (<< ${binary:Version}+b+~)

(Don't use it for the case where both are Arch: any though.)

Regards,
Christian



Re: Daemon config update

2016-08-31 Thread Christian Seiler
Hi,

to add two more comments:


Am 31. August 2016 18:55:08 MESZ, schrieb Christian Seiler <christ...@iwakd.de>:
>Am 31. August 2016 13:51:41 MESZ, schrieb Dmitry Bogatov
><kact...@ruggedinbox.com>,
>> If not, how should I tell user,
>>that default configuration changes and they may want to restart daemon
>>manually?
>
>If anything relevant changes, add a NEWS file to your package, that's
>the accepted convention for informing users. Typically you'd
>auto-restart on updates regardless.

I should also mention that if the changes are quite critical, you can use 
debconf prompts to inform the user or maybe even ask them what to do, depending 
on the situation. Don't overdo it though, because users don't want to see too 
many of them.

Finally: remember that invasive changes will only happen for testing/sid users, 
and when someone dist-upgrades from e.g. oldstable to stable. In both cases 
people are expected to pay attention. So it's not like changes will catch 
people by surprise. (And anyone running unattended-upgrades on testing/sid has 
no right to be surprised. ;-))

Regards,
Christian



Re: Daemon config update

2016-08-31 Thread Christian Seiler
Hi,

Off topic: I initially replied to both you and the list, but your address 
doesn't seem to exist. Just as a heads-up, in case that's unintentional.

Regards,
Christian


Re: Daemon config update

2016-08-31 Thread Christian Seiler
Hi,

Am 31. August 2016 13:51:41 MESZ, schrieb Dmitry Bogatov 
,
>Please share best practices on daemon configuration upgrade -- should
>I restart (no reload, unfortunately) daemon, when I upgrade it?

In general: yes. Even if your daemon supports reload, you should restart it, 
because you've installed a new version and don't want the old version running.

There are two ways of doing so: stop in preinst and start again in postinst, or 
keep running during upgrade and restart in postinst. The latter is preferred, 
because it keeps the downtime low, but some daemons can't cope with files being 
replaced on disk while they are running (especially if they consist of multiple 
binaries). See the manpages for dh_installinit and dh_systemd_start for 
details. If for some reason you need to do so manually, please respect users' 
policy-rc.d, e.g. by only using invoke-rc.d.
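
As a simplified sketch (not the literal code debhelper generates, and "foo" 
stands for your init script name), such a postinst fragment looks like:

if [ -x /etc/init.d/foo ]; then
	invoke-rc.d foo restart || exit 1
fi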

There are some special cases though: obviously you can't restart pid 1, so both 
sysvinit and systemd (and likely others) support an operation called 'reexec', 
in which they serialize their state and call exec() on the new binary. You 
could also implement that for non pid 1 processes, but people typically don't 
bother.

With systemd (and maybe other inits) you can also save your state (e.g. open 
FDs) in pid 1 and gather it again when starting. This would also be a way to 
minimize disruptions. Outside of systemd components themselves though, I 
haven't seen anything in the wild yet that makes use of that. (And you'd still 
need to do a regular restart on non-systemd systems, so you can't purely rely 
on that.)

>Restart can be disruptive to user.

Yes, but since it's Debian policy (or at the very least convention) to restart 
stuff on upgrades, users will expect that.

> If not, how should I tell user,
>that default configuration changes and they may want to restart daemon
>manually?

If anything relevant changes, add a NEWS file to your package, that's the 
accepted convention for informing users. Typically you'd auto-restart on 
updates regardless.

That all said: there may be cases where you can't (sanely) upgrade a daemon. 
For example, if you have a network storage daemon responsible for the rootfs: 
in that case you might not be able to do that without crashing the system. (It 
depends though; I co-maintain open-iscsi, which does support restarts, even 
when the rootfs is on iSCSI.)

In the end, you have a bit of discretion as package maintainer what the best 
thing is for your package.

tl;dr: err on the side of restarting, but there are legitimate exceptions.

Hope that helps.

Regards,
Christian




Re: Any idea why bitbucket watch file does not work?

2016-08-31 Thread Christian Seiler
On 08/31/2016 09:44 AM, Andreas Tille wrote:
> I was following the Wiki[1] to get a bitbucket watch for metaphlan2[2].
> Unfortunately uscan does not detect any match and after starring on the
> code and trying several other regexp I failed finding the mistake.
> 
> Any idea how to get the watch file working and reporting 2.6.0 as latest
> version?

Well, uscan isn't psychic, it just uses what's available on a given
download page.

If you go to

https://bitbucket.org/biobakery/metaphlan2/downloads

you'll see that there are three links:

 - bitbucket's autogenerated link for downloading the entire
   repository

 - two .txt files that have nothing to do with the release

In fact, there don't appear to be any upstream tarballs available.

However, they do appear to tag their releases, so (not tested though)
the following watch file should work for the autogenerated tarballs
from bitbucket based on tags:

https://bitbucket.org/biobakery/metaphlan2/downloads?tab=tags .*/(\d\S*)\.tar\.gz
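
(Remember the "version=4" line at the top to make that a complete
debian/watch file.)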

HTH.

Regards,
Christian



Re: gcc-6 and sip help (bug 812138)

2016-08-15 Thread Christian Seiler
On 08/15/2016 12:43 PM, Gudjon I. Gudjonsson wrote:
> This fails to compile with the following message:
> 
> make[2]: Entering directory '/home/gudjon/nb/pyqwt3d/
> pyqwt3d-0.1.7~cvs20090625/build/py2.7-qt4/configure/OpenGL_Qt4'
> g++ -c -g -O2 -fstack-protector-strong -Wformat -Werror=format-security  -
> Wdate-time -D_FORTIFY_SOURCE=2 -fPIC -O2 -Wall -W -D_REENTRANT -DNDEBUG -
> DGL2PS_HAVE_ZLIB -DHAS_NUMPY -DQT_NO_DEBUG -DQT_OPENGL_LIB -I. -I/usr/include/
> qwtplot3d-qt4 -I/usr/include/python2.7 -I/usr/lib/python2.7/dist-packages/
> numpy/core/include -I/usr/include/qt4/Qt -I/usr/share/qt4/mkspecs/default -I/
> usr/include/qt4/QtOpenGL -I/usr/include/qt4 -I/usr/X11R6/include -o 
> sipOpenGLcmodule.o sipOpenGLcmodule.cpp
> sipOpenGLcmodule.cpp:4445:1: error: narrowing conversion of ‘4294967295u’ 
> from 
> ‘unsigned int’ to ‘int’ inside { } [-Wnarrowing]
>  };

Beginning with gcc 5, when using the newest C++ standard, narrowing
conversions of constants are an error:

https://gcc.gnu.org/onlinedocs/gcc/C_002b_002b-Dialect-Options.html
   (search for "-Wnarrowing")
https://gcc.gnu.org/wiki/FAQ#Why_does_GCC_not_give_an_error_for_some_narrowing_conversions_within_list-initializations_as_required_by_C.2B-.2B-11_.28-Wnarrowing.29_.3F

Starting with gcc 6, the default C++ standard the compiler uses is
now gnu++14 (i.e. C++14 + GNU extensions). IIRC the default standard
the compiler assumed previously was still C++98. (I may be wrong
though.)
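
A minimal reproducer for what the generated code runs into (my own
example, not sip's output):

// narrowing.cpp
int main()
{
    int bits[] = { 0xFFFFFFFFu };  // 4294967295 doesn't fit in int
    (void) bits;
}

Compiling this with g++ 6 (or with -std=c++11 and newer on g++ 5)
fails with the same -Wnarrowing error, while -std=gnu++98 at most
warns.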

Now obviously, the problem is with the original sip file: int is
simply the wrong data type for these constants. You should use
unsigned int instead.

However, it appears that only two constants are affected here, and
the error message is actually quite misleading about the position
of the error, because sip creates a large data structure, and the
position where the error is thrown is at the end of the definition
of the data structure. If you investigate the constants that are
defined in that data structure further though, you'll realize that
only two are actually >= 0x80000000, i.e. don't fit in a signed int
anymore.

Those are:

GL_CLIENT_ALL_ATTRIB_BITS
GL_ALL_ATTRIB_BITS

which both are defined as 0xFFFFFFFF in GL/gl.h, to indicate that
all bits (that fit into 32bit int[0]) are meant.

The proper fix would probably be to replace all int constants with
unsigned int constants, but that has two issues:

 - sip doesn't appear to make a difference between int and unsigned
   int (it uses int internally), so that doesn't actually help

 - you don't want to touch the entire source file
   (ok, you could touch only those two constants, but that would
   be asymmetric)

You could probably resort to long long as a data type (long is not
sufficient on 32bit platforms), but that's really not a good idea
IMHO.

The easiest solution is probably to add -Wno-narrowing to the
compiler flags. Now due to the fact that how your pacakge build
works [1], you actually have to add 

  --extra-cxxflags="-Wno-narrowing"

to the configure.py invocation in debian/rules (for example you can
put it before --extra-libs=...). That disables the compiler
diagnostic and makes the package compile again. I just tried that
and it does build the package. [2]
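
In debian/rules that would look roughly like this (a sketch; apart
from the new flag, all options are placeholders for whatever the
rules file already passes):

python configure.py \
    --extra-cxxflags="-Wno-narrowing" \
    --extra-libs="..."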

However, and this was actually also a bug in the previous version
of the package, the two constants are defined "wrongly" in Python:

>>> import PyQt4.Qwt3D.OpenGL
>>> PyQt4.Qwt3D.OpenGL.GL_CLIENT_ALL_ATTRIB_BITS
-1
>>> PyQt4.Qwt3D.OpenGL.GL_ALL_ATTRIB_BITS
-1

(Should be 4294967295, if they really were unsigned.)

It's not terribly problematic, because using those constants with
a bitwise AND in python will result in the correct answer:

>>> PyQt4.Qwt3D.OpenGL.GL_ALL_ATTRIB_BITS & PyQt4.Qwt3D.OpenGL.GL_2D == 
>>> PyQt4.Qwt3D.OpenGL.GL_2D
True

But a really proper fix for this would entail:

 - file a bug against sip to support unsigned int properly
 - once that's fixed, remove the -Wno-narrowing flag again in your
   package and use unsigned int for all of the constants, not a
   signed int (at least for those constants that are obviously
   meant to be used in a bit field context)

Hope that helps.

Regards,
Christian

[0] Which in theory is not completely portable, because int could
be larger or smaller than 32 bits (though in practice it isn't
on any arch Debian supports); the right way to define a
constant for "all bits in an unsigned int" would be to use
#define CONSTANT_NAME (~0u)
However, that's probably something one should tell OpenGL
upstream, because it's there that that constant is defined in
that way.

[1] Btw., you can drop the entire CFLAGS = logic in debian/rules,
because a) you're compiling C++ code, so CXXFLAGS would be
relevant and b) the way python builds its modules is that it
always uses the compiler and flags that python itself was
compiled with (and it's non-trivial to override that), so
even CXXFLAGS would likely have no effect at all.

[2] There's a reason for this 

Bug#830788: RFS: ifstat/1.1-9

2016-08-08 Thread Christian Seiler
On 08/08/2016 10:36 AM, Goswin von Brederlow wrote:
> Ifstat upstream is alive and responsive. The command is just complete,
> no new features have been added. So I guess we should keep ifstat, if
> only for kfreebsd and hurd.

Maybe the best idea would then be to have iproute2 ship the
ifstat utility as iproute2-ifstat (or similar), keep ifstat
as the name for the ifstat package, and if a Linux admin
wants to have ifstat be the iproute2 thing, they can easily
set an alias in their shell. (Maybe add instructions to a
README.Debian file.)

Regards,
Christian



Bug#827933: RFS: yabar/0.4.0-3 [ITP]

2016-08-02 Thread Christian Seiler
On 07/27/2016 03:28 AM, Sean Whitton wrote:
> 13. Why a 'low' upload urgency?  Counterintuitively, this means that you
> think the package is more likely than usual to be buggy and so it should
> take longer to migrate to testing; it doesn't actually mean "less
> important".  Unless you think the upload is buggy, you should use
> priority=medium.

I disagree: this is a new package (ITP), and I think it is appropriate
to have urgency=low for these, even if you think they are completely
bug-free. Existing packages in unstable are much more likely to be
tested sooner by users (and find bugs that the maintainer didn't find
before uploading), just because that only involves upgrading your
system, which many sid users do regularly. But new packages need to
be explicitly installed by people first, which takes additional time.

Also, I disagree on another level: if you think your upload is buggy,
you shouldn't upload it at all (unless it's less buggy than the
version in the archive), but fix the bugs first. ;-) urgency=low for
existing packages is IMHO a good idea if you have done major changes
to the package and while you believe everything is correct, you'd
like to have a bit more time for people to test and find flaws. Or
if for example upstream has released a new major version and while
you are confident that it won't break anything, you want to be on
the safe side.

IMHO, of course.

Regards,
Christian



Re: Request for access to porterbox

2016-07-28 Thread Christian Seiler
On 07/28/2016 08:52 AM, Adam Borowski wrote:
> On Thu, Jul 28, 2016 at 12:56:11AM +0200, Christian Seiler wrote:
>> That works now? When I set up a SH4 chroot a while back, I had to
>> use the qemu-sh4-static binary from the i386 version of the
>> qemu-user-static package, because the amd64 version was broken.
>> (Luckily, static linking.)
> 
> #805827 which is an ex-bug.

Cool. :)

>> Plus, aptitude is broken on many (but not all) archs when used
>> together with qemu-user-static (segfaults), so if you use that
>> kind of chroot together with pbuilder, in my experience you need
>> to revert to the classic satisfydepends (which is much slower)
>> to make pbuilder work properly.
> 
> As debootstrap uses regular apt rather than aptitude, why would this be a
> concern?

For debootstrap? No. For pbuilder? Yes.

> And aptitude's dependency resolution is broken more often than
> not, so sticking with apt is more reliable also on the installed system.

If you are using pbuilder, then it defaults to aptitude to satisfy
the build dependencies of a package. There are alternatives, such
as the classic (shell script based) resolution scheme, but they
have some problems. My point was that if you want to use such a
chroot to automatically build packages via pbuilder, then you need
to tell pbuilder to not use aptitude - while for native archs the
default works well.
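
For reference, switching pbuilder to the classic resolver is a
one-line setting in ~/.pbuilderrc (the resolver script ships with
pbuilder itself):

# use the shell-script-based resolver instead of aptitude
PBUILDERSATISFYDEPENDSCMD=/usr/lib/pbuilder/pbuilder-satisfydepends-classic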

>> ppc64el needs QEMU_CPU=POWER8. (qemu-ppc64-static and
>> qemu-ppc64le-static are basically the same save for endianness, but
>> Debian's ppc64el port requires a POWER8 CPU at least, whereas the
>> ppc64 port runs on POWER5 and higher IIRC.)
> 
> #813698, an ex bug.

Cool. :)

Regards,
Christian



Re: Request for access to porterbox

2016-07-27 Thread Christian Seiler
On 07/28/2016 12:56 AM, Christian Seiler wrote:
> qemu-debootstrap fails for m68k with Illegal Instruction at the
> beginning of debootstrap --second-stage. I did get a working
> chroot by fiddling with stuff for a while manually IIRC (not on
> the computer I'm currently on, I'd have to look that up), but I
> don't remember what I did.

Actually, I remember now: you need to compile a custom version of
qemu-m68k-static because upstream QEMU doesn't support the CPUs
that Debian's port requires. It's even described on the Debian
Wiki, see:
https://wiki.debian.org/M68k/sbuildQEMU

Regards,
Christian



Re: Request for access to porterbox

2016-07-27 Thread Christian Seiler
On 07/28/2016 12:21 AM, Adam Borowski wrote:
> On Wed, Jul 27, 2016 at 11:07:17PM +0200, Christian Seiler wrote:
>> m68k and sh4 do work in qemu-user-static chroots, the setup
>> is not quite as trivial however. (I can give you a tarball
>> that will work in pbuilder and schroot though.)
> 
> For qemu-user for sh4 it's:
> 
> CHROOT=/srv/chroots/sh4   #name the chroot here
> apt-get -y install debian-ports-archive-keyring qemu-user-static
> btrfs subv create "$CHROOT" || mkdir "$CHROOT"
> mkdir -p "$CHROOT/usr/bin"
> cp -p /usr/bin/qemu-sh4-static "$CHROOT/usr/bin/"
> debootstrap --arch=sh4 \
> --keyring=/usr/share/keyrings/debian-ports-archive-keyring.gpg \
> unstable "$CHROOT" http://ftp.ports.debian.org/debian-ports/

That works now? When I set up a SH4 chroot a while back, I had to
use the qemu-sh4-static binary from the i386 version of the
qemu-user-static package, because the amd64 version was broken.
(Luckily, static linking.)

qemu-debootstrap fails for m68k with Illegal Instruction at the
beginning of debootstrap --second-stage. I did get a working
chroot by fiddling with stuff for a while manually IIRC (not on
the computer I'm currently on, I'd have to look that up), but I
don't remember what I did.

Plus, aptitude is broken on many (but not all) archs when used
together with qemu-user-static (segfaults), so if you use that
kind of chroot together with pbuilder, in my experience you need
to revert to the classic satisfydepends (which is much slower)
to make pbuilder work properly.

Also, if you have debian-ports for the binary packages, you still
need the normal archive for the source packages, so the deb and
deb-src lines (if you want to add both to sources.list) diverge.

> Same works for all other archs supported by qemu, other than powerpcspe
> (needs QEMU_CPU=e500v2).

mips64el needs QEMU_CPU=mips64dspr2. (Most stuff works without
that env var, but some things don't.)

ppc64el needs QEMU_CPU=POWER8. (qemu-ppc64-static and
qemu-ppc64le-static are basically the same save for endianness, but
Debian's ppc64el port requires a POWER8 CPU at least, whereas the
ppc64 port runs on POWER5 and higher IIRC.)
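
In practice that just means exporting the variable when entering the
chroot, e.g. (the chroot path is a placeholder):

$ sudo QEMU_CPU=POWER8 chroot /srv/chroots/ppc64el /bin/bash

Without it, IIRC some binaries die with SIGILL, because the CPU model
qemu emulates by default lacks the POWER8 instructions.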

I have never gotten sparc64 to work in qemu-user-static, that I
only got working with qemu-system. (Debian Wiki contains some
instructions for that though.)

> Non-linux archs need qemu-system rather than qemu-user.  As all of them are
> x86, you want -enable-kvm, or, if you're scared by qemu-system, virtualbox.

I would always recommend libvirt + virt-manager for VMs (have
been using that since Squeeze, works *really* well, I have even
migrated running VMs between hosts since Squeeze with libvirt +
virsh, without any problems).

Regards,
Christian



Re: Request for access to porterbox

2016-07-27 Thread Christian Seiler
On 07/27/2016 07:41 PM, Dmitry Bogatov wrote:
> Hello, I am looking for DD to sponsor my request for access a portbox
> to debug #832544, #832543.
> Architectures requested: hurd, m68k, sh4

If you have access to x86 hardware (as most people do), you can
run Hurd in a VM - no need for a porterbox.

See https://wiki.debian.org/Debian_GNU/Hurd at the very top
under "installation and testing". Caveat: IIRC virtio doesn't
work on Hurd, so you need to emulate an IDE disk drive plus
a non-virtio network adapter for that to work.
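
A qemu invocation along those lines would look something like this
(the image name is a placeholder, and the NIC model is a guess on my
part, rtl8139 or e1000 should both be fine):

$ qemu-system-x86_64 -enable-kvm -m 1G \
    -drive file=debian-hurd.img,format=raw,if=ide \
    -net nic,model=e1000 -net user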

m68k and sh4 do work in qemu-user-static chroots, the setup
is not quite as trivial however. (I can give you a tarball
that will work in pbuilder and schroot though.) Also, gdb
and strace in qemu-user chroots are non-trivial to use.
Ask me if you're interested.

Regards,
Christian



Re: systemd WantedBy= target changed - canonical way to clean up old .wants symlinks?

2016-07-27 Thread Christian Seiler
On 07/27/2016 10:21 PM, Patrick Schleizer wrote:
> Hi!
> 
> When changing a systemd WantedBy= target...
> 
> Is there a canonical way to clean up the old .wants symlinks?
> 
> These are not automatically removed on package upgrades. Considered a
> bug or feature? :)

Bug, I reported that a while back:
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=797108
Unfortunately, I lost track of that at some point...

*re-add to TODO list*

> rm_conffile?

Probably just rm -f in postinst (with version check).
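
A minimal sketch, assuming the stale symlink is foo.service in
multi-user.target.wants and 1.2-3 is the first version with the new
target (all names and the version are placeholders):

# in postinst: only on upgrades from before the fix
if [ "$1" = configure ] && [ -n "$2" ] && \
   dpkg --compare-versions "$2" lt "1.2-3"; then
    rm -f /etc/systemd/system/multi-user.target.wants/foo.service
fi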

Regards,
Christian



Re: Request for access to porterbox

2016-07-27 Thread Christian Seiler
On 07/27/2016 09:17 PM, Andrey Rahmatullin wrote:
> On Wed, Jul 27, 2016 at 08:41:51PM +0300, Dmitry Bogatov wrote:
>> Rationaly: reproduce #832544, #832543
> I wonder why these bugs are important and not wishlist.

Because they are FTBFS bugs on non-release archs and hence should
be of severity important? (Just as FTBFS on release archs are
considered "serious".) important doesn't block testing migration,
so it doesn't impede anything even if you have -ENOTIME to fix
it right now.

(Developers Reference explicitly states: "be kind to porters".
I consider severity: important for FTBFS to be part of that.)

The only case where I think wishlist is appropriate if you have
packages that explicitly declare a subset of architectures (e.g.
because they contain hand-crafted assembly code for each arch
or are a compiler or similar) instead of Arch: any.

Regards,
Christian



Re: Preliminary questions for sponsoring a compiler

2016-07-25 Thread Christian Seiler
On 07/25/2016 09:28 PM, Albert van der Horst wrote:
> Christian Seiler schreef op 2016-07-25 18:14:
>> I don't quite get what you mean, I never had any problem with
>> that.
> 
> {Probably going off topic here]
> A pure assembler file means full control over section names, their
> properties (in particular, readable, writable *and* executable) and
> where they are located. My compilation feature requires a stable
> layout for the ELF header.

Ah, ok, I didn't know you'd want that much control. Personally
I tend to rely on relocations and ld resolving stuff for me,
so I don't really care about the detailed ELF file layout, even
when using assembler. But if that doesn't fit your use case,
then yeah, GNU ld is probably not what you want.

> By the way write.s gives me  `` bad register name `%rax' ''

Well, then your gcc is probably not x86_64 by default. ;-)
Since assembler is different for every processor, I just took
the most obvious architecture I know.

Regards,
Christian



Re: Bug#832299: python-ruffus: FTBFS: sphinx.ext.mathjax: other math package is already loaded

2016-07-25 Thread Christian Seiler
Control: block 832299 by 827806

On 07/25/2016 02:02 PM, Andreas Tille wrote:
> Format: "jpg" not recognized. Use one of: canon cmap cmapx cmapx_np dot eps 
> fig gd gd2 gv imap imap_np ismap pdf pic plain plain-ext png pov ps ps2 svg 
> svgz tk vml vmlz x11 xdot xdot1.2 xdot1.4 xlib
> ' returned non-zero exit status 1

This is bug #827806:

https://bugs.debian.org/827806

I've done some investigation and added a comment to that bug report
that will hopefully help in fixing it.

Since your package does nothing wrong here, I don't think you should
do anything. If you really want your package to build _now_ for some
other reason, you could try to switch the dot format to png instead
of jpg (I would recommend png instead of jpg anyway for graphviz,
because jpg is more suitable for photos and not diagrams).
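
For reference, this is easy to verify by hand: whether -Tjpg is
available depends on how graphviz was built, while png is always in
the format list quoted above:

$ dot -Tjpg -o graph.jpg graph.dot   # fails with the "Format: 'jpg'
                                     # not recognized" error
$ dot -Tpng -o graph.png graph.dot   # works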

Regards,
Christian



Re: Preliminary questions for sponsoring a compiler

2016-07-25 Thread Christian Seiler
On 07/25/2016 02:51 PM, Albert van der Horst wrote:
> The problem is: ld is not stable w.r.t. linking pure assembler files.

How so?

$ cat write.s
.text

.global _start
.type   _start, @function

_start:
mov $1, %rax
mov %rax, %rdi
lea str, %rsi
mov strsize, %rdx
syscall
xor %rdi, %rdi
mov $60, %rax
syscall

.data
str:
.string "Hello World!\n"
strsize:
.quad   .-str
$ gcc -Wall -c -o write.o write.s && ld -o write write.o
$ ./write
Hello World!
$ strace ./write
execve("./write", ["./write"], [/* 58 vars */]) = 0
write(1, "Hello World!\n\0", 14Hello World!
)= 14
_exit(0)= ?
+++ exited with 0 +++
$ cat write-nasm.s
section .text
global  _start

_start:
mov rax, 0x1
mov rdi, rax
lea rsi, [hello]
mov rdx, length
syscall
xor edi, edi
mov rax, 60
syscall


section .data
hello   db  'Hello World!', 10, 0
length  equ $ - hello
$ nasm -f elf64 -o write-nasm.o write-nasm.s
$ ld -o write-nasm write-nasm.o
$ ./write-nasm
Hello World!
$ strace ./write-nasm
execve("./write-nasm", ["./write-nasm"], [/* 58 vars */]) = 0
write(1, "Hello World!\n\0", 14Hello World!
)= 14
_exit(0)= ?
+++ exited with 0 +++

I don't quite get what you mean, I never had any problem with
that.

Regards,
Christian



Re: reproducible-builds

2016-07-19 Thread Christian Seiler

Am 2016-07-19 11:29, schrieb Dominique Dumont:

> On Monday, July 18, 2016 6:20:51 PM CEST Herbert Fortes wrote:
>> dvbackup
>
> Is this package worth the effort ?

Not a user myself, but the package is already in the archive (it's
not an ITP), and I think reproducibility for _all_ of Debian is a
goal we should want to obtain. I don't think having a package that
does not build reproducibly in Debian should be a thing in the
long run: either the package is made reproducible, or the package
should be removed from the archive. Obviously this won't happen
any time soon, because we still aren't close enough to 100% and
some infrastructure bits are still missing. But I believe this
really should be a long-term goal.

> Is there anyone left who uses DV tapes to perform backups when a 16GB
> thumb drive has more capacity and is more practical for this purpose
> than a DV camcorder ?


I doubt that many people will want to use this tool to create a
new backup solution - but as this package has been in the archive
for more than a decade, many people might still have old backups
on DV, and may want to read their old backups with a current
Debian version.

Regards,
Christian


