Re: Bits from the FTPMaster meeting

2009-11-28 Thread Guillem Jover
On Tue, 2009-11-24 at 10:25:59 +0000, Neil Williams wrote:
 On Sat, 21 Nov 2009 03:01:06 +0100
 Guillem Jover guil...@debian.org wrote:
  Well, IMO any program implementing .deb extraction w/o using something
  like --fsys-tarfile, --extract or --control from dpkg-deb (until we
  have the upcoming libdpkg...), should be prepared to handle the format
  described in deb(5), and deserves a bug otherwise. The fact that the
  Debian archive only accepts a subset of the valid .deb format, or that
  we might not want to have bzip2 compressed packages in the base system
  is a matter of policy in Debian, and does not mean others might not want
  to do otherwise.
 
 Fixed in multistrap 2.0.4, just arriving in sid.

Checking the svn repo I still see at least one problematic piece of
code. In “emrootfslib (unpack_debootstrap)” it's using ar + tar, with a
temporary file where it could just use a pipe, and it could have simply
used ‘dpkg-deb -X’ instead of ‘dpkg -x’ to get the list of files. Then
you get any format supported by dpkg-deb for free, in addition to being
a bit more efficient.
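
For illustration, the difference looks roughly like this (a sketch, not
the actual emrootfslib code; $pkg and $rootdir are placeholders):

  # hand-rolled: tied to data.tar.gz and a temporary file
  tmp=$(mktemp -d)
  ar -p "$pkg" data.tar.gz > "$tmp/data.tar.gz"
  tar -C "$rootdir" -xzf "$tmp/data.tar.gz"

  # dpkg-deb: one command, prints the list of extracted files
  # (-X is --vextract), and copes with any compression deb(5) allows
  dpkg-deb -X "$pkg" "$rootdir"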

 I'll update deb-gview for its next release, although I'll need some
 real packages using data.tar.bz2 before I can test it.

You could repack an existing .deb using dpkg-deb and the -Z option.
But please, make sure to read deb(5) and support any valid deb package.
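
For example (a sketch; the package name is only a placeholder):

  dpkg-deb -x hello_2.4-1_i386.deb tmpdir          # filesystem tree
  dpkg-deb -e hello_2.4-1_i386.deb tmpdir/DEBIAN   # control information
  dpkg-deb -Zbzip2 -b tmpdir hello-bz2_2.4-1_i386.deb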

thanks,
guillem





Re: Bits from the FTPMaster meeting

2009-11-28 Thread Neil Williams
On Sat, 28 Nov 2009 22:25:31 +0100
Guillem Jover guil...@debian.org wrote:

 On Tue, 2009-11-24 at 10:25:59 +0000, Neil Williams wrote:
  On Sat, 21 Nov 2009 03:01:06 +0100
  Guillem Jover guil...@debian.org wrote:
   Well, IMO any program implementing .deb extraction w/o using something
   like --fsys-tarfile, --extract or --control from dpkg-deb (until we
   have the upcoming libdpkg...), should be prepared to handle the format
   described in deb(5), and deserves a bug otherwise. The fact that the
   Debian archive only accepts a subset of the valid .deb format, or that
   we might not want to have bzip2 compressed packages in the base system
   is a matter of policy in Debian, and does not mean others might not want
   to do otherwise.
  
  Fixed in multistrap 2.0.4, just arriving in sid.
 
 Checking the svn repo I still see at least one problematic piece of
 code. In “emrootfslib (unpack_debootstrap)”

I did say that multistrap was fixed - emrootfslib is part of emsandbox
which is meant to serve Emdebian Crush, not multistrap. Currently, all
development on Emdebian Crush is stalled - including the root
filesystem installation methods. The current code is there to support
Lenny and has no support for Squeeze or Sid because the only packages
available for Crush are for ARM, not armel or any other Debian
architecture. There is no ARM support in Squeeze, therefore changes made
in any package after the release of Lenny have no effect on any part of
the Emdebian Crush packages or support system, including emrootfslib.
Crush development will only restart when we have a new build system
based on multiarch that can cope with more than one architecture.

http://lists.debian.org/debian-embedded/2009/08/msg5.html

It's the old single-developer-now-unavailable problem.

As I said, the issue is fixed in multistrap - which is the only place
in that SVN repo where the fix actually matters. It is not currently
possible to use emrootfslib with any package more recent than the
version released in Lenny - no other packages exist for it to use and
new packages cannot be built for it to use.

Once Crush development does restart, the systems used to generate and
install root filesystems may well migrate to multistrap anyway -
basing the multistrap on the modified packages in the Emdebian Crush
repository.

emrootfslib is a specialised component of a specialised tool for a
specialised set of packages with a specialised purpose. General changes
in dpkg have negligible impact on it.

multistrap, however, is a much more general purpose script intended to
work with Emdebian Grip. As Grip is binary-compatible with standard
Debian, multistrap works with standard Debian too - hence the fix
uploaded to Sid.

Crush 1.0 was a learning curve, a proof of concept, with only a few
hundred packages on a single architecture. Lots of parts of the build
system for Crush 1.0 will not survive into Crush 3.0; as the systems
mature, the need for such specialised tools becomes less relevant. Only
then can Crush be usable enough to support more packages and more
architectures.

A lot of that testing and development is going on now within Emdebian
Grip. Once multiarch allows us to solve the fundamental breakage in the
build system used for Crush 1.0, we can start to look to the future.

  I'll update deb-gview for its next release, although I'll need some
  real packages using data.tar.bz2 before I can test it.
 
 You could repack an existing .deb using dpkg-deb and the -Z option.

deb-gview doesn't repack anything. It inspects the contents of the .deb
directly using libarchive. deb-gview does not use dpkg.

-- 


Neil Williams
=
http://www.data-freedom.org/
http://www.nosoftwarepatents.com/
http://www.linux.codehelp.co.uk/





Re: Bits from the FTPMaster meeting

2009-11-28 Thread Guillem Jover
On Sat, 2009-11-28 at 22:31:29 +0000, Neil Williams wrote:
 On Sat, 28 Nov 2009 22:25:31 +0100 Guillem Jover wrote:
  On Tue, 2009-11-24 at 10:25:59 +0000, Neil Williams wrote:
   I'll update deb-gview for its next release, although I'll need some
   real packages using data.tar.bz2 before I can test it.
  
  You could repack an existing .deb using dpkg-deb and the -Z option.
 
 deb-gview doesn't repack anything. It inspects the contents of the .deb
 directly using libarchive. deb-gview does not use dpkg.

You were asking for a package compressed with something else than gzip
for testing purposes; I was offering an easy way to get you one from
any existing package by repacking it using dpkg-deb.

Still, that will not give you testing coverage for all valid .debs,
like ones having members starting with _ in between the mandatory ones,
additional members afterwards, etc.
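
Such a test case is easy enough to craft by hand with ar(1); a sketch,
with arbitrary member contents:

  ar x normal.deb    # yields debian-binary, control.tar.gz, data.tar.gz
  echo ignore-me > _padding
  ar rc weird.deb debian-binary _padding control.tar.gz data.tar.gz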

regards,
guillem





Re: Bits from the FTPMaster meeting

2009-11-28 Thread Neil Williams
On Sun, 29 Nov 2009 00:13:14 +0100
Guillem Jover guil...@debian.org wrote:

 On Sat, 2009-11-28 at 22:31:29 +0000, Neil Williams wrote:
  On Sat, 28 Nov 2009 22:25:31 +0100 Guillem Jover wrote:
   On Tue, 2009-11-24 at 10:25:59 +0000, Neil Williams wrote:
I'll update deb-gview for its next release, although I'll need some
real packages using data.tar.bz2 before I can test it.
   
   You could repack an existing .deb using dpkg-deb and the -Z option.
  
  deb-gview doesn't repack anything. It inspects the contents of the .deb
  directly using libarchive. deb-gview does not use dpkg.
 
 You were asking for a package compressed with something else than gzip
 for testing purposes; I was offering an easy way to get you one from
 any existing package by repacking it using dpkg-deb.

Ah, sorry - misunderstood. Thanks.

-- 


Neil Williams
=
http://www.data-freedom.org/
http://www.nosoftwarepatents.com/
http://www.linux.codehelp.co.uk/





Re: Bits from the FTPMaster meeting

2009-11-24 Thread Neil Williams
On Sat, 21 Nov 2009 03:01:06 +0100
Guillem Jover guil...@debian.org wrote:

 On Sun, 2009-11-15 at 22:22:52 +0000, Neil Williams wrote:
  On Sun, 15, Nov, 2009 at 02:37:56PM -0500, Joey Hess spoke thus..
   Note that debootstrap does not support data.tar.bz2.
  ar -p "./$pkg" data.tar.gz | zcat | tar -xf -
 
 This has been fixed now in debootstrap's svn. I've also sent a set
 of patches to use dpkg-deb instead of ar when available.
 
  deb-gview is also affected by this but I haven't had any bug reports.
  Fairly easy to fix that in deb-gview though due to the use of
  libarchive.
  
  multistrap will also be affected.

 Well, IMO any program implementing .deb extraction w/o using something
 like --fsys-tarfile, --extract or --control from dpkg-deb (until we
 have the upcoming libdpkg...), should be prepared to handle the format
 described in deb(5), and deserves a bug otherwise. The fact that the
 Debian archive only accepts a subset of the valid .deb format, or that
 we might not want to have bzip2 compressed packages in the base system
 is a matter of policy in Debian, and does not mean others might not want
 to do otherwise.

Fixed in multistrap 2.0.4, just arriving in sid. I'll update deb-gview
for its next release, although I'll need some real packages using
data.tar.bz2 before I can test it. 

-- 


Neil Williams
=
http://www.data-freedom.org/
http://www.nosoftwarepatents.com/
http://www.linux.codehelp.co.uk/





Re: Bits from the FTPMaster meeting

2009-11-22 Thread Steve Langasek
On Wed, Nov 18, 2009 at 11:14:29PM +0900, Charles Plessy wrote:
 You are a member of the technical committee, which means that I should trust
 your experience.  I want you and this list to understand that I take your
 advice to orphan my packages very seriously.

Well, that's unfortunate, because Manoj isn't speaking for the Technical
Committee.  As a fellow member of the TC, I think Manoj was being
inappropriately inflammatory and insulting with these comments, and I think
by the time he was done purging the rolls of everyone he thought we
shouldn't support as a maintainer, there'd be nothing of Debian left.

That said:

 For the programs I am interested in, I do not share Debian's goal to make
 them run on all existing platforms we support.

 Trust me, it is not only to save my time, but also because I do not want my
 packages to be a burden to the community. It is my experience that for
 bioinformatics packages, when a bug is found by the buildd network on an
 unsupported architecture, neither upstream nor the porters show much
 interest in it. I do not mean this as a criticism, since I share the
 point of view that there are better things to do than fixing those bugs.

I certainly don't agree with your position here.  We have decided as a
project to support Debian as a general-purpose operating system on an
amazing breadth of different architectures, because we *don't know* what
new and amazing purpose users will put their hardware (or our software) to,
and we want to be in a position to support them whatever the case.

If some manufacturer did announce tomorrow the availability of a new
high-end cluster solution based on ARM or MIPS processors, would we be
poised to take advantage of it?  Are your packages usable by sites that have
made significant investments previously in architectures that are no longer
competitive in the marketplace for new hardware, but that nevertheless meet
the processor demands of their specific computing application?

Packages with porting bugs are not a burden on the community, precisely
because of our collective commitment to *fix* these bugs.  As long as
you're not working /against/ users who want to see your packages supported
on their arch, there's no reason to worry overly much if your package has
not yet been ported to that architecture.  It's a bug, but not a critical
one, and being bug-free is an unrealistic standard.

-- 
Steve Langasek   Give me a lever long enough and a Free OS
Debian Developer   to set it on, and I can move the world.
Ubuntu Developerhttp://www.debian.org/
slanga...@ubuntu.com vor...@debian.org




Re: Bits from the FTPMaster meeting

2009-11-22 Thread Manoj Srivastava
On Sun, Nov 22 2009, Steve Langasek wrote:

 On Wed, Nov 18, 2009 at 11:14:29PM +0900, Charles Plessy wrote:
 You are a member of the technical committee, which means that I should trust
 your experience.  I want you and this list to understand that I take your
 advice to orphan my packages very seriously.

 Well, that's unfortunate, because Manoj isn't speaking for the
 Technical Committee.

Right.

 As a fellow member of the TC,

Does that mean you are speaking with your TC hat on? If so, that
 is wildly inappropriate; if not, mentioning it here is mostly
 irrelevant and distracting, unless, of course, you want to appear to
 argue from a position of authority.

 I think Manoj was being inappropriately inflammatory and insulting
 with these comments,

While we are bandying opinions around, let me say that the
 developers with the mindset of “works for my pet architecture(s)”, and
 their near kin “works for me™”, who would prefer not to take care of
 bugs in their packages unless their own pet usages are impacted, are a
 liability that a project with the avowed goal of being a universal OS,
 and also of being the best OS, cannot possibly afford.

 and I think by the time he was done purging the rolls of everyone he
 thought we shouldn't support as a maintainer, there'd be nothing of
 Debian left.

And I think your judgment on what is acceptable quality has
 become dangerously lax. I can only speculate this might be the
 influence of your day job; but down here we have not fully abrogated
 package quality on a fuller range of architectures, and not yet cast
 away quality of implementation for ease of use for novices.

It is not enough for people to just not stand in the way of
 other people trying to fix their packages; Developers should still be
 expected to have an active hand in improving the quality of software
 they maintain, to the best of their abilities.  We are not glorified
 packagers with exotic titles like “over master of the multiverse”; we
 are called Developers for a reason.

manoj
-- 
Don't make a big deal out of everything; just deal with everything.
Manoj Srivastava sriva...@debian.org http://www.debian.org/~srivasta/  
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C





Re: Bits from the FTPMaster meeting

2009-11-20 Thread Fabian Greffrath
On 15.11.2009 16:15, Joerg Jaspert wrote:
 multiple outstanding and intrusive patches got merged. We also discussed
 various outstanding topics, a few of which we can report about already,
 a few others where we still have to gather more information. This
 process, either asking our lawyers or various other people, has already
 been started.

May I guess that “asking our lawyers” also covers the topic around
ffmpeg and related (possibly patent-threatened, mostly
multimedia-related) packages? Will you keep us (i.e. the pkg-multimedia
maintainers team) informed in that case?





Re: Bits from the FTPMaster meeting

2009-11-20 Thread Guillem Jover
Hi!

 On Sun, 2009-11-15 at 22:22:52 +0000, Neil Williams wrote:
 On Sun, 15, Nov, 2009 at 02:37:56PM -0500, Joey Hess spoke thus..
  Note that debootstrap does not support data.tar.bz2.
 
 debootstrap-1.0.20/functions: extract
 
   progress "$p" "$#" EXTRACTPKGS "Extracting packages"
   packagename="$(echo "$pkg" | sed 's,^.*/,,;s,_.*$,,')"
   info EXTRACTING "Extracting %s..." "$packagename"
   ar -p "./$pkg" data.tar.gz | zcat | tar -xf -

This has been fixed now in debootstrap's svn. I've also sent a set
of patches to use dpkg-deb instead of ar when available.
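
The idea behind those patches, sketched (this is not the actual
debootstrap code):

  if command -v dpkg-deb >/dev/null 2>&1; then
    # dpkg-deb handles any compression deb(5) allows
    dpkg-deb --fsys-tarfile "./$pkg" | tar -xf -
  else
    # still ar-based, but at least pick the decompressor by member name
    member="$(ar -t "./$pkg" | grep '^data\.tar')"
    case "$member" in
      data.tar.gz)  ar -p "./$pkg" "$member" | gunzip  | tar -xf - ;;
      data.tar.bz2) ar -p "./$pkg" "$member" | bunzip2 | tar -xf - ;;
      data.tar)     ar -p "./$pkg" "$member" | tar -xf - ;;
    esac
  fi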

 deb-gview is also affected by this but I haven't had any bug reports.
 Fairly easy to fix that in deb-gview though due to the use of
 libarchive.
 
 multistrap will also be affected.

Well, IMO any program implementing .deb extraction w/o using something
like --fsys-tarfile, --extract or --control from dpkg-deb (until we
have the upcoming libdpkg...), should be prepared to handle the format
described in deb(5), and deserves a bug otherwise. The fact that the
Debian archive only accepts a subset of the valid .deb format, or that
we might not want to have bzip2 compressed packages in the base system
is a matter of policy in Debian, and does not mean others might not want
to do otherwise.

regards,
guillem





Re: Bits from the FTPMaster meeting

2009-11-19 Thread Philipp Kern
On 2009-11-19, Luk Claes l...@debian.org wrote:
 This could only work if the built package is needed on the same buildd
 it was built.

That depends on the assumptions.  If the assumption is that the buildds are
trusted (the same as for autosigning), it would also be easy to argue that
setting up some kind of collective protected repository for sharing among
the buildds would not be totally insane.  But then, just implement autosigning,
get rid of that step, and reuse “autobuilding accepted”, or however it's called
nowadays.

Kind regards,
Philipp Kern





Re: Bits from the FTPMaster meeting

2009-11-19 Thread Andreas Tille
On Thu, Nov 19, 2009 at 05:52:21AM +0100, Goswin von Brederlow wrote:
 And then someone comes along and builds a Supercomputer cluster out of
 game consoles.

Well, it *might be* that *someone* does this or that.  But didn't we say
we give priority to our user_s_ (mind the plural)?  So for the
theoretical chance that someone does something in the future for a use
we do not know about, or the other chance mentioned in this thread that
somebody might perhaps do some weird things on hardware which is for
some reason not supported by a certain piece of software, should we
refuse to support a couple of users who *really* want to use the
program?

This sounds neither very sane nor in the interest of current Debian
users.

Kind regards

  Andreas.

-- 
http://fam-tille.de





Re: Bits from the FTPMaster meeting

2009-11-19 Thread Sune Vuorela
On 2009-11-19, Mike Hommey m...@glandium.org wrote:
 On Wed, Nov 18, 2009 at 11:16:41PM +, Sune Vuorela wrote:
 On 2009-11-18, Gerfried Fuchs rho...@deb.at wrote:
   I am a bit confused with respect to how buildd autosigning is required
  for this. It makes it sound somehow like it would affect porter binary
 
 Basically, the turnaround time is too long if we have to wait for manual
 buildd signings.
 
 For example, when we upload a new KDE, we usually upload a big chunk of
 source packages (3-5) where package 1 breaks the last package.
 
 Currently, we can upload all source packages built for amd64 and i386
 and that way keep kde installable in unstable for more than 95 % of the
 users. 
 
 With 1 package signing per day (which is quite normal), we have 5 days
 where kde by itself is uninstallable on all archs, if the buildds have
 to build all packages by current means.

 Stupid question: If all these packages are interdependent and need to be
 built the same day, why not upload them as a single package ?

Most of the time it is only for new Y releases in X.Y.Z, very
rarely for new Z versions, and never for the Debian revision.  And
it is not small packages we are talking about.

/Sune





Re: Bits from the FTPMaster meeting

2009-11-19 Thread Bernhard R. Link
* Felipe Sateler fsate...@gmail.com [091118 23:39]:
 You apparently fail to see that building the packages on mips uncovers
 bugs that would otherwise be there, but take a longer time to uncover on
 the 'mainstream' platforms.

 This is not generally true. There are are classes of bugs that appear on
 different platforms _due to being different platforms_, not just because
 they were latent bugs waiting to be discovered. I presume that packages
 that require as much efficiency as possible (like Charles is implying in
 his packages) are very likely to implement platform-specific
 hacks/optimizations to run faster. It can be considered bad design, ugly
 and whatnot, but it is irrelevant if nobody ever uses other platforms.

But even in this class, lazily disabling builds on obscure architectures
can have severe downsides:

- the platform-specific hacks usually work (or are not a problem on the
  slower architectures, because there are typically no hacks for them).
  What in my experience breaks is the unoptimized fallback code path.
  Having that one tested and working makes it much easier for future
  porters of new architectures you might actually care for.

- platform-specific characteristics are not guaranteed to stay obscure.
  I do not doubt amd64 would have had much more trouble if the alpha
  port had not had several years to fix all those little 64-bit errors.
  (It would have been quite easy back then to claim that some strange
   display errors in some little graphical games were not worth the
   effort to fix, as no one would buy and run an alpha to play games.)

- a big issue outside i386 is alignment. People coming from i386 are
  used to casting pointers without any ill effects. Having the other
  architectures fix these bugs not only makes i386 faster by avoiding
  uncaught unaligned accesses, but also future-proofs the code, as
  even in x86 processors the new vectorisation support has alignment
  requirements.

- very often only the more obscure architectures may break, but that's
  because some specific things in that architecture break some invalid
  code, often supposed to be some clever optimisation. While there
  are optimisations that are valid C code and still platform specific,
  there are more of them that actually misuse undefined behaviour.
  While those clever tricks work now, they may break with every new
  compiler version, because new optimisations in the compiler easily
  break the invalid code. Those are usually some of the hardest
  things to track down (which also explains why many porting issues
  so often appear to intel-centric maintainers as making no progress:
  they usually take this much time even when they hit a mainline
  architecture. It just does not happen as often there, because the bug
  hit a less common architecture first and was resolved there already).

Respectfully,
Bernhard R. Link
-- 
Never contain programs so few bugs, as when no debugging tools are available!
Niklaus Wirth





Re: Bits from the FTPMaster meeting

2009-11-19 Thread Goswin von Brederlow
Andreas Tille andr...@an3as.eu writes:

 On Thu, Nov 19, 2009 at 05:52:21AM +0100, Goswin von Brederlow wrote:
 And then someone comes along and builds a Supercomputer cluster out of
 game consoles.

 Well, it *might be* that *someone* does this or that.  But didn't we say
 we give priority to our user_s_ (mind the plural)?  So for the
 theoretical chance that someone does something in the future for a use
 we do not know about, or the other chance mentioned in this thread that
 somebody might perhaps do some weird things on hardware which is for
 some reason not supported by a certain piece of software, should we
 refuse to support a couple of users who *really* want to use the
 program?

 This sounds neither very sane nor in the interest of current Debian
 users.

 Kind regards

   Andreas.

Luckily this is not an either-or situation.

Regards,
Goswin





Re: Bits from the FTPMaster meeting

2009-11-19 Thread Goswin von Brederlow
Luk Claes l...@debian.org writes:

 Goswin von Brederlow wrote:
 Sune Vuorela nos...@vuorela.dk writes:
 
 On 2009-11-18, Gerfried Fuchs rho...@deb.at wrote:
  I am a bit confused with respect to how buildd autosigning is required
 for this. It makes it sound somehow like it would affect porter binary
 Basically, the turnaround time is too long if we have to wait for manual
 buildd signings.

 For example, when we upload a new KDE, we usually upload a big chunk of
 source packages (3-5) where package 1 breaks the last package.

 Currently, we can upload all source packages built for amd64 and i386
 and that way keep kde installable in unstable for more than 95 % of the
 users. 

 With 1 package signing per day (which is quite normal), we have 5 days
 where kde by itself is uninstallable on all archs, if the buildds have
 to build all packages by current means.

 With buildd autosigning, we probably only have a day or so on the fast
 archs with kde being uninstallable.

 and I have the impression that we will get quite many bug reports about
 kde being uninstallable. We already do that when kde is a part of
 another transition, and if kde is blocking itself on main archs, we will
 only get more.

 So yes, I really hope that 'source only' (or throw away binaries)
 uploads only get implemented when buildd autosigning is in place.

 (KDE doesn't have that many users on e.g. hppa, so the current
 turnaround time isn't that much of a problem outside the main archs)

 /Sune
 
 An alternative way to solve this is to use built packages on the
 buildd without waiting for them to be signed and uploaded. This would
 require some coordination with wanna-build so later KDE packages are
 only given to the buildd that has the earlier ones available.

 This could only work if the built package is needed on the same buildd
 it was built.

What part of “require some coordination with wanna-build” did you not read?

 The buildd would then build all of KDE and the buildd admin could sign
 it all in one go. That way you have potentially 0 uninstallable time.

 It's very unlikely that the builds for all these packages end up on the
 same buildd, so in practice that would not work. It could be an
 improvement though.

 Cheers

 Luk

Regards,
Goswin





Re: Bits from the FTPMaster meeting

2009-11-19 Thread Goswin von Brederlow
Philipp Kern tr...@philkern.de writes:

 On 2009-11-19, Luk Claes l...@debian.org wrote:
 This could only work if the built package is needed on the same buildd
 it was built.

 That depends on the assumptions.  If the assumption is that the buildds are
 trusted (the same as for autosigning) it would also be easy to argue that
 setting up some kind of collective protected repository for sharing among
 the buildd would not be totally insane.  But then, just implement autosigning,
 get rid of that step and reuse autobuilding accepted, or however it's called
 nowadays.

 Kind regards,
 Philipp Kern

When autosigning came up in the past, the argument given against it was
that buildd admins do some quality control on the packages. They
notice when a buildd goes haywire and screws up builds. With
autosigning you can easily get 200 totally broken debs into the archive
because the buildd had a broken debhelper or something.

Regards,
Goswin





Re: Bits from the FTPMaster meeting

2009-11-19 Thread Philipp Kern
On 2009-11-19, Goswin von Brederlow goswin-...@web.de wrote:
 This could only work if the built package is needed on the same buildd
 it was built.
 What part of “require some coordination with wanna-build” did you not read?

Well, maybe because wanna-build wouldn't be involved except for an updated
data source for edos-debcheck.  Otherwise wanna-build does not really care
from which repositories the buildds fetch.  Sadly.

Kind regards,
Philipp Kern






Re: Bits from the FTPMaster meeting

2009-11-19 Thread Luk Claes
Goswin von Brederlow wrote:
 Philipp Kern tr...@philkern.de writes:
 
 On 2009-11-19, Luk Claes l...@debian.org wrote:
 This could only work if the built package is needed on the same buildd
 it was built.
 That depends on the assumptions.  If the assumption is that the buildds are
 trusted (the same as for autosigning) it would also be easy to argue that
 setting up some kind of collective protected repository for sharing among
 the buildd would not be totally insane.  But then, just implement 
 autosigning,
 get rid of that step and reuse autobuilding accepted, or however it's called
 nowadays.

 Kind regards,
 Philipp Kern
 
 When autosigning came up in the past, the argument given against it was
 that buildd admins do some quality control on the packages. They
 notice when a buildd goes haywire and screws up builds. With
 autosigning you can easily get 200 totally broken debs into the archive
 because the buildd had a broken debhelper or something.

With autosigning these 200 could as easily get fixed.

Cheers

Luk





Re: Bits from the FTPMaster meeting

2009-11-19 Thread Goswin von Brederlow
Luk Claes l...@debian.org writes:

 Goswin von Brederlow wrote:
 Philipp Kern tr...@philkern.de writes:
 
 On 2009-11-19, Luk Claes l...@debian.org wrote:
 This could only work if the built package is needed on the same buildd
 it was built.
 That depends on the assumptions.  If the assumption is that the buildds are
 trusted (the same as for autosigning) it would also be easy to argue that
 setting up some kind of collective protected repository for sharing among
 the buildd would not be totally insane.  But then, just implement 
 autosigning,
 get rid of that step and reuse autobuilding accepted, or however it's called
 nowadays.

 Kind regards,
 Philipp Kern
 
 When autosigning came up in the past, the argument given against it was
 that buildd admins do some quality control on the packages. They
 notice when a buildd goes haywire and screws up builds. With
 autosigning you can easily get 200 totally broken debs into the archive
 because the buildd had a broken debhelper or something.

 With autosigning these 200 could as easily get fixed.

 Cheers

 Luk

Only if they can all be binNMUed. And meanwhile users have broken
systems.

But I'm just saying that was the argument in the past. Maybe history
has simply shown that it doesn't happen often enough to be a concern
(or buildd admins miss such screwups too often anyway). So no need to
discuss this.

Regards,
Goswin





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Jean-Christophe Dubacq
Luk Claes wrote:
 Charles Plessy wrote:
 On Tue, Nov 17, 2009 at 08:27:22AM +0100, Yves-Alexis Perez wrote:
 Unless your proposal is just for unstable but doesn't want to change the
 policy for testing migration?
 Hi,

 Testing migration works the way it should: if a package is never built on an
 architecture, testing migration is not prevented. The problem is that for the
 sake of universality, some programs are built where nobody wants them. Then
 when there is a build failure, nobody wants the ‘hot potato’. Upstream does 
 not
 support non-mainstream arches, the porters are busy porting more central
 packages, the package maintainer has user requests to answer and knows that
 nobody will send him kudos for building the package where it is not used.
 
 The reason we want everything to be built everywhere if possible is not
 universality, but quality.
 
 If your package FTBFS on some architecture, then that is a bug. A bug
 that was already there, it just was not noticed yet. In most cases the
 bug is rather easy to fix, even for non porters as most of the
 architecture specific FTBFS issues are due to wrong assumptions like
 32bit/64bit, little endian/big endian...

Is there a list somewhere of how to fix these? Something simple so that
maintainers can do the right thing as soon as a package FTBFS?


-- 
Jean-Christophe Dubacq





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Clint Adams
On Wed, Nov 18, 2009 at 07:41:51AM +0100, Luk Claes wrote:
 I don't think it's good to waste buildd time on failing to build packages.
 I also don't think anyone is stopped from setting up a service that
 allows source-only uploads as a go-between.

Do you mean set up an unofficial upload queue that builds a source package,
autosigns the .changes, and uploads it to Debian?





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Stefano Zacchiroli
On Wed, Nov 18, 2009 at 08:47:33AM +0100, Jean-Christophe Dubacq wrote:
  If your package FTBFS on some architecture, then that is a bug. A bug
  that was already there, it just was not noticed yet. In most cases the
  bug is rather easy to fix, even for non porters as most of the
  architecture specific FTBFS issues are due to wrong assumptions like
  32bit/64bit, little endian/big endian...
 Is there a list somewhere of how to fix these? Something simple so that
 maintainers can do the right thing as soon as a package FTBFS?

http://wiki.debian.org/qa.debian.org/FTBFS

(linked from all bug reports reported by massive rebuild by Lucas)

The page is maintained in a collaborative manner: if you find a common
pattern, please add it there. The page might also need some
re-organization. Any contribution is welcome.

Cheers.

-- 
Stefano Zacchiroli -o- PhD in Computer Science \ PostDoc @ Univ. Paris 7
z...@{upsilon.cc,pps.jussieu.fr,debian.org} -- http://upsilon.cc/zack/
Dietro un grande uomo c'è ..|  .  |. Et ne m'en veux pas si je te tutoie
sempre uno zaino ...| ..: | Je dis tu à tous ceux que j'aime




Re: Bits from the FTPMaster meeting

2009-11-18 Thread Andreas Tille
On Wed, Nov 18, 2009 at 07:41:51AM +0100, Luk Claes wrote:
 
 I think one would be surprised how many packages get used on 'exotic'
 architectures. Most users don't specifically search for a piece of
 software, they want to have some specific task done by using a specific
 package. Not providing the package will only mean that the user either
 uses another package or does not get the task done.

Well, I do not think that you can do gene sequencing or number crunching
on current mobile phones.  So there are really programs which are not
needed on all architectures, and even if you find a binary package which
claims to do the job, it is just useless.  Even if I agree with your
argument that each program should at least theoretically build on any
architecture (if not, it is a bug), in some cases it looks foolish to
provide binary packages just for the sake of it.  This is what Charles
meant when he wrote: “we should trust the maintainer if a specific
program is not needed for a certain architecture.”

 Slow architectures are dying, otherwise new, faster chipsets would get
 built IMHO.

There are architectures for different needs.  There are tasks which
always need the fastest available architecture, and there are other
needs which target low power consumption etc.  We should probably not
put a large effort into a theoretical option which is never used in
real life (and I mean a real *never*, not just low chances).

Kind regards

 Andreas. 

-- 
http://fam-tille.de





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Philipp Kern
On 2009-11-18, Andreas Tille andr...@an3as.eu wrote:
 Well, I do not think that you can do gene sequencing or number crunching
 on current mobile phones.  So there are really programs which are not
 needed on all architectures, and even if you find a binary package which
 claims to do the job, it is just useless.  Even if I agree with your
 argument that each program should at least theoretically build on any
 architecture (if not, it is a bug), in some cases it looks foolish to
 provide binary packages just for the sake of it.  This is what Charles
 meant when he wrote: “we should trust the maintainer if a specific
 program is not needed for a certain architecture.”

Or the porters (cf. xorg video drivers on s390).  But that's what
P-a-s is for, at the moment.  Still, it ought to be buildable everywhere;
there might not be clusters of arm yet, but I saw offers for clusters of
mips.

Kind regards,
Philipp Kern





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Gerfried Fuchs
Hi!

 First of all, thanks for this great roundup. There are just a few
questions that popped up in my mind that I hope haven't been asked yet
(I wasn't able to check all the responses completely ...). Sorry if there
are duplications; a reference to the answer for easier tracking would be
appreciated, though. :)

* Joerg Jaspert jo...@ganneff.de [2009-11-15 16:15:35 CET]:
 source-only uploads
 ---
 The current winning opinion is to go with the source+throw away
 binaries route.  We are close to being able to achieve this, it is
 simply that it has not yet been enabled.  Before any version of this
 can be enabled, buildd autosigning needs to be implemented in order
 that dak can differentiate buildd uploads vs maintainer uploads.

 I am a bit confused with respect to how buildd autosigning is required
for this. It makes it sound somehow like it would affect porter binary
uploads. Is this the case or am I reading too much into this? What's the
rationale for the autosigning requirement, and would porters
still be able to upload binary-only without having the uploads thrown away
because they aren't signed with a key in the buildd-keyring? It's
unfortunately not too uncommon that some buildds have issues over a
longer period of time, and being able to help while that's the case is
what I consider an important feature for a porter.

 Tracking arch all packages
 --
 #246992 asked us to not delete arch all packages before the
 corresponding (if any) arch any packages are available for all
 architectures.  Example: whenever a new source package for emacs23
 gets uploaded the installation of the metapackage emacs_*_all.deb
 breaks on most architectures until the needed Architecture: any
 packages like emacs23 get built by the buildds. That happens because
 dak removes all arch: all packages but the newest one.

 What exactly is meant by deleting here? From the pool? Or does it
mean that the Packages files will keep all versions of the arch all
packages in them, thus reducing the number of uninstallable packages?
The latter would greatly help with regular reports of uninstallable
packages that weren't yet built and the old binary package depending on
the old arch: all package, which otherwise wouldn't be available anymore.
From what I understand (and tried), apt does the right thing and chooses
the most recent version in cases where it doesn't matter anyway.

 Thanks in advance for clearing up these questions, and again, thanks
for your work!

 So long,
Rhonda





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Charles Plessy
On Wed, Nov 18, 2009 at 12:42:47AM -0600, Manoj Srivastava wrote:
 
 I beg to differ. This sounds like a maintainer that is not
  providing the support for their package, and needs to  orphan that
  package; not building on some architecture is often a symptom of
  problems elsewhere as well. I am not sure we ought to support
  maintainers that are neglectful of their packages.

You are a member of the technical committee, which means that I should trust
your experience. I want you and this list to understand that I take your
advice to orphan my packages very seriously. For the programs I am interested
in, I do not share Debian's goal to make them run on all existing platforms we
support.

Trust me, it is not only to save my time, but also because I do not want my
packages to be a burden to the community. It is my experience that for
bioinformatics packages, when a bug is found by the buildd network on an
unsupported architecture, neither upstream nor the porters show much interest
in it. I do not mean this as a criticism, since I share the point of view
that there are better things to do than fixing those bugs.

Luk suggested to use an unofficial upload system, and indeed I have been
browsing the documentation of Ubuntu's personal package archives and signed
their code of conduct recently. The only problem is that their PPAs do not
build the packages against Lenny or Sid, but actually it would not be a
problem for many of the users of my packages, because apparently they are
Ubuntu users…

I am of course pleased to see my work re-used, but I would be even more pleased
if people would use Debian Med. To attract more users, we need a good release
and good medical packages. I do think that not spending time on porting some
of our bioinformatics packages would help both sides of the coin.

Have a nice day,

-- 
Charles Plessy
Tsurumi, Kanagawa, Japan





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Charles Plessy
On Wed, Nov 18, 2009 at 10:54:18AM +0000, Philipp Kern wrote:

 there might not be clusters of arm yet but I saw offers for clusters of mips.

Hi Philipp

I also saw this cluster and got quite curious until I realised that most
programs I package are not parallelised…

The day we are contacted to do some genomics on mips clusters, it will be
very exciting, but this should come with real support from the groups
interested; let's not forget we are just volunteers. Preparing for this
event with no indication of whether it will happen or not is risky and, in
my opinion, premature.

If we mean to attract such users, I do not think that the best strategy would
necessarily be having pre-existing MIPS support for bioinformatics, which I
think is completely beyond our reach and expertise. I think that what would
matter would be to have a healthy MIPS port on one side, and to be the best
distro for bioinformatics on mainstream platforms on the other side. This
would be a solid basis to start a collaboration to become a good
bioinformatics distro on MIPS. Just because we can build packages is not the
best indicator: most of them have no regression tests yet, and a significant
number of the build failures I experienced on my packages happen during such
tests…

So in conclusion (like a broken record), with a simple modification of
dpkg-gencontrol, we can stop building on some architectures some packages
which bring them no added value. For new packages, that seems to be enough.
For existing packages, maintainers who want to opt out of some architectures
would need to submit a patch against the packages-arch-specific file and
submit a bunch of dak commands to the release team. This could be
consolidated in batches, and I can help with this, so that the workload is
minimal compared to the gain for everybody.

Have a nice day,

-- 
Charles Plessy
Tsurumi, Kanagawa, Japan





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Manoj Srivastava
On Wed, Nov 18 2009, Charles Plessy wrote:

 On Wed, Nov 18, 2009 at 12:42:47AM -0600, Manoj Srivastava wrote:
 
 I beg to differ. This sounds like a maintainer that is not
  providing the support for their package, and needs to  orphan that
  package; not building on some architecture is often a symptom of
  problems elsewhere as well. I am not sure we ought to support
  maintainers that are neglectful of their packages.

 You are a member of the technical committee, which means that I should
 trust your experience. I want you and this list to understand that I
 take your advice to orphan my packages very seriously.

While I am flattered, I don't think you should pay special
 attention to my words based on who I am. That is the flip side of
 arguing to the man; arguing by authority. You should pay attention to
 my argument only if it makes sense.

 For the programs I am interested in, I do not share Debian's goal to
 make them run on all existing platforms we support.

I don't think that that is the rationale for making packages
 build everywhere; if it were, we would not have P-a-s. The rationale is
 that making packages portable unmasks bugs that are present everywhere,
 but not yet triggered.

Now, there are of course packages that do not make sense to
 build on all architectures, or not to build on specific arches. My
 SELinux-related packages are an example -- they do not make sense to
 have on kFreeBSD or the Hurd. Which is why we have mechanisms to
 exclude packages from architectures -- and by default, if a package has
 never built on an architecture, it is not a testing migration blocker.

The answer is to utilize these exception mechanisms.

 Trust me, it is not only to save my time, but also because I do not
 want my packages to be a burden to the community. It is my experience
 that for bioinformatics packages, when a bug is found by the buildd
 network on an unsupported architecture, neither upstream nor the
 porters show much interest in it. I do not mean this as a criticism,
 since I share the point of view that there are better things to do than
 fixing those bugs.

Right. But it is not for upstream or the porters alone: this is
 what we, as Debian developers, do.  We are not just glorified
 packagers; we are supposed to be Developers, and we take an active role
 in improving and fixing our packages. Anything less does not do justice
 to the project's goal of creating the best OS ever.

 I am of course pleased to see my work re-used, but I would be even
 more pleased if people would use Debian Med. To attract more users, we
 need a good release and good medical packages. I do think that not
 spending time on porting some of our bioinformatics packages would help
 both sides of the coin.

Firstly, if it requires that much porting, it might point to a
 defect in design, which should be fixed. Secondly, if there is a
 legitimate reason (and of course there are legitimate reasons to not
 build stuff on some arches) -- then talk to your fellow Debian
 developers, and get an entry added to the P-a-s. It is not hard.
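
For illustration, entries in that file look roughly like this (package
names are made up; the syntax, per the file's own comments, uses a
leading % for source packages and ! to exclude an architecture):

  some-selinux-tool: !kfreebsd-i386 !kfreebsd-amd64 !hurd-i386
  %some-numeric-code: amd64 i386 ia64 powerpc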

manoj
-- 
...and scantily clad females, of course.  Who cares if it's below zero
outside (By Linus Torvalds)
Manoj Srivastava sriva...@debian.org http://www.debian.org/~srivasta/  
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Manoj Srivastava
On Wed, Nov 18 2009, Clint Adams wrote:

 On Wed, Nov 18, 2009 at 07:41:51AM +0100, Luk Claes wrote:
 I don't think it's good to waste buildd time on failing to build packages.
 I also don't think anyone is stopped from setting up a service that
 allows source-only uploads as a go-between.

 Do you mean set up an unofficial upload queue that builds a source
 package, autosigns the .changes, and uploads it to Debian?

If such a system is set into play, I promise to help test it by
 funneling my uploads through it.

manoj
-- 
It would be nice to be sure of anything the way some people are of
everything.
Manoj Srivastava sriva...@debian.org http://www.debian.org/~srivasta/  
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Mark Brown
On Wed, Nov 18, 2009 at 11:40:52PM +0900, Charles Plessy wrote:

 If we mean to attract such users, I do not think that the best strategy would
 necessarily be having pre-existing MIPS support for bioinformatics, which I
 think is completely beyond our reach and expertise. I think that what would
 matter would be to have a healthy MIPS port on one side, and to be the best
 distro for bioinformatics on mainstream platforms on the other side. This
 would be a solid basis to start a collaboration to become a good
 bioinformatics distro on MIPS. Just because we can build packages is not the
 best indicator: most of them have no regression tests yet, and a significant
 number of the build failures I experienced on my packages happen during such
 tests…

It's a bit worrying that the software requires noticeable porting effort
in the first place - often that's a sign of general fragility which will
also manifest itself on supported arches sooner or later.

 So in conclusion (like a broken record), with a simple modification of
 dpkg-gencontrol, we can stop building on some architectures some packages
 which bring them no added value. For new packages, that seems to be enough.
 For existing packages, maintainers who want to opt out of some architectures
 would need to submit a patch against the packages-arch-specific file and
 submit a bunch of dak commands to the release team. This could be
 consolidated in batches, and I can help with this, so that the workload is
 minimal compared to the gain for everybody.

The flip side of this is that it's just inviting maintainers to decide
they can't be bothered with porting effort, leaving ports as second-class
citizens.





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Luk Claes
Andreas Tille wrote:
 On Wed, Nov 18, 2009 at 07:41:51AM +0100, Luk Claes wrote:
 I think one would be surprised how many packages get used on 'exotic'
 architectures. Most users don't specifically search for a piece of
 software, they want to have some specific task done by using a specific
 package. Not providing the package will only mean that the user either
 uses another package or does not get the task done.
 
 Well, I do not think that you can do gene sequencing or number crunching
 on current mobile phones.  So there are really programs which are not
 needed on all architectures, and even if you find a binary package which
 claims to do the job, it is just useless.  Even if I agree with your
 argument that each program should at least theoretically build on any
 architecture (if not, it is a bug), in some cases it looks foolish to
 provide binary packages just for the sake of it.  This is what Charles
 meant when he wrote: “we should trust the maintainer if a specific
 program is not needed for a certain architecture.”
 
  Slow architectures are dying, otherwise new, faster chipsets would get
  built IMHO.
 
 There are architectures for different needs.  There are tasks which
 always need the fastest available architecture, and there are other
 needs which target low power consumption etc.  We should probably not
 put a large effort into a theoretical option which is never used in
 real life (and I mean a real *never*, not just low chances).

That is what I meant. There are users of openoffice.org on armel and
mipsel, so it's not at all theoretical, even if one would think
differently at first glance.

Cheers

Luk





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Luk Claes
Clint Adams wrote:
 On Wed, Nov 18, 2009 at 07:41:51AM +0100, Luk Claes wrote:
 I don't think it's good to waste buildd time on failing to build packages.
 I also don't think anyone is stopped from setting up a service that
 allows source-only uploads as a go-between.
 
 Do you mean set up an unofficial upload queue that builds a source package,
 autosigns the .changes, and uploads it to Debian?

I was more thinking of an unofficial upload queue that builds a source
package and lets the maintainer sign to do the upload to Debian, instead
of autosigning. So it would still be the maintainer's responsibility to
check the resulting package before uploading.
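
The maintainer-side flow might then look like this (hypothetical; the
"go-between" dput target and the package are made up):

  dput go-between foo_1.0-1_source.changes   # source-only upload to the queue
  # ...the queue builds the binaries and returns an unsigned .changes...
  debsign foo_1.0-1_amd64.changes            # check the result, then sign
  dput ftp-master foo_1.0-1_amd64.changes    # the actual upload to Debian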

Cheers

Luk





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Luk Claes
Gerfried Fuchs wrote:
   Hi!
 
  First of all, thanks for this great roundup. There are just a few
 questions that popped up in my mind that I hope haven't been asked yet
 (I wasn't able to check all the responses completely ...). Sorry if there
 are duplications; a reference to the answer for easier tracking would be
 appreciated, though. :)
 
 * Joerg Jaspert jo...@ganneff.de [2009-11-15 16:15:35 CET]:
 source-only uploads
 ---
 The current winning opinion is to go with the source+throw away
 binaries route.  We are close to being able to achieve this, it is
 simply that it has not yet been enabled.  Before any version of this
 can be enabled, buildd autosigning needs to be implemented in order
 that dak can differentiate buildd uploads vs maintainer uploads.
 
  I am a bit confused with respect to how buildd autosigning is required
 for this. It makes it sound somehow like it would affect porter binary
 uploads. Is this the case or am I reading too much into this? What's the
 rationale for the autosigning requirement, and would porters
 still be able to upload binary-only without having the uploads thrown away
 because they aren't signed with a key in the buildd-keyring? It's
 unfortunately not too uncommon that some buildds have issues over a
 longer period of time, and being able to help while that's the case is
 what I consider an important feature for a porter.

The rationale is that the turnaround time would get smaller. Currently
the built package waits for the buildd admin to manually sign. A smaller
turnaround time would at least affect transitions, binNMUs and testing
migration in general.

AFAIK binary packages would only be thrown away for sourceful uploads.

A solution still needs to be implemented regarding the Architecture:
all packages, plus actual testing and preparation by the wb-team, to get
actual autosigning on any buildd, btw.

 Tracking arch all packages
 --
 #246992 asked us to not delete arch all packages before the
 corresponding (if any) arch any packages are available for all
 architectures.  Example: whenever a new source package for emacs23
 gets uploaded the installation of the metapackage emacs_*_all.deb
 breaks on most architectures until the needed Architecture: any
 packages like emacs23 get built by the buildds. That happens because
 dak removes all arch: all packages but the newest one.
 
  What exactly is meant by deleting here? From the pool? Or does it
 mean that the Packages files will keep all versions of the arch all
 packages in them, thus reducing the number of uninstallable packages?
 The latter would greatly help with regular reports of uninstallable
 packages that weren't yet built and the old binary package depending on
 the old arch: all package, which otherwise wouldn't be available anymore.
 From what I understand (and tried), apt does the right thing and chooses
 the most recent version in cases where it doesn't matter anyway.

The solution consists of keeping the Architecture: all packages in the
Packages files as long as the corresponding Architecture: any packages,
if any, are not yet installed in the archive (the actual implementation
is a bit more involved due to some corner cases).

Cheers

Luk





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Luk Claes
Charles Plessy wrote:
On Wed, Nov 18, 2009 at 10:54:18AM +, Philipp Kern wrote:
 
 there might not be clusters of arm yet but I saw offers for clusters of mips.
 
 Hi Philipp
 
 I also saw this cluster and got quite curious until I realised that most
 programs I package are not parallelised…

A cluster is not only about parallel programs, but also about running
single threaded applications in parallel...

 If we mean to attract such users, I do not think that the best strategy
 would necessarily be having pre-existing MIPS support for bioinformatics,
 which I think is completely beyond our reach and expertise. I think that
 what would matter would be to have a healthy MIPS port on one side, and
 to be the best distro for bioinformatics on mainstream platforms on the
 other side. This would be a solid basis to start a collaboration to
 become a good bioinformatics distro on MIPS. Just because we can build
 packages is not the best indicator: most of them have no regression
 tests yet, and a significant number of the build failures I experienced
 on my packages happen during such tests…

You apparently fail to see that building the packages on mips uncovers
bugs that would otherwise still be there, just taking longer to uncover
on the 'mainstream' platforms.

 So in conclusion (like a broken record), with a simple modification of
 dpkg-gencontrol, we can stop building, on some architectures, packages
 which bring them no added value. For new packages, that seems to be
 enough. For existing packages, maintainers who want to opt out of some
 architectures would need to submit a patch against the
 packages-arch-specific file and submit a bunch of dak commands to the
 release team. This could be consolidated in batches and I can help with
 this, so that the workload is minimal compared to the gain for
 everybody.

As IMHO there is added value in building a package on all release
architectures, there is no reason to change dpkg-gencontrol at all.

Cheers

Luk





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Andreas Tille
On Wed, Nov 18, 2009 at 08:18:57PM +0100, Luk Claes wrote:
  There are architectures for different issues.  There are issues which
  always need the fastest available architecture and there are other
  needs which target low power consumption etc.  We should probably not
  put a large effort into a theoretical option which is never used in
  real life (and I mean a real *never*, not only low chances).
 
 That is what I meant. There are users of openoffice.org on armel and
 mipsel, so it's not at all theoretical even if one would think
 differently at first look.

That's actually *not* what I mean.  I agree that there might be a use
for openoffice.org on any architecture.  These general-purpose
applications are exactly what I was *not* speaking about.  But please let
this settle, because the 'find a bug' reason might be strong enough to
make our arguing void.

Kind regards

 Andreas.

-- 
http://fam-tille.de





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Felipe Sateler

Luk Claes wrote:


You apparently fail to see that building the packages on mips uncovers
bugs that would otherwise still be there, just taking longer to uncover
on the 'mainstream' platforms.


This is not generally true. There are classes of bugs that appear on
different platforms _due to being different platforms_, not just because
they were latent bugs waiting to be discovered. I presume that packages
that require as much efficiency as possible (as Charles is implying for
his packages) are very likely to implement platform-specific
hacks/optimizations to run faster. It can be considered bad design, ugly
and whatnot, but it is irrelevant if nobody ever uses the other platforms.

Regards,
Felipe Sateler





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Philipp Kern
On 2009-11-18, Felipe Sateler fsate...@gmail.com wrote:
 This is not generally true. There are classes of bugs that appear on
 different platforms _due to being different platforms_, not just because
 they were latent bugs waiting to be discovered. I presume that packages
 that require as much efficiency as possible (as Charles is implying for
 his packages) are very likely to implement platform-specific
 hacks/optimizations to run faster. It can be considered bad design, ugly
 and whatnot, but it is irrelevant if nobody ever uses the other platforms.

However, that's one *exact* use case for P-a-s: if you do such
optimizations. (Cf. zsnes with its x86 assembly as a hard example.)

Mostly, though, I'd guess that it's written in a higher-level language without
resorting to architecture-dependent assembly.  And this code should, barring
alignment issues, also run on other platforms.

(OK, there are weird cases like the stack growing upwards; that's a
special case, I agree. However, in most programs you do not need to deal
with this fact.)

Kind regards,
Philipp Kern





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Sune Vuorela
On 2009-11-18, Gerfried Fuchs rho...@deb.at wrote:
  I am a bit confused with respect to how buildd autosigning is required
 for this. It makes it sound somehow like it would affect porter binary

Basically, the turnaround time is too long if we have to wait for manual
buildd signings.

For example, when we upload a new KDE, we usually upload a big chunk of
source packages (3-5) where package 1 breaks the last package.

Currently, we can upload all source packages built for amd64 and i386
and that way keep kde installable in unstable for more than 95 % of the
users. 

With 1 package signing per day (which is quite normal), we have 5 days
where kde by itself is uninstallable on all archs, if the buildds have
to build all packages by current means.

With buildd autosigning, we probably only have a day or so on the fast
archs with kde being uninstallable.

And I have the impression that we will get quite a few bug reports about
kde being uninstallable. We already get those when kde is part of
another transition, and if kde is blocking itself on the main archs, we
will only get more.

So yes, I really hope that 'source only' (or throw away binaries)
uploads only get implemented when buildd autosigning is in place.

(KDE doesn't have that many users on e.g. hppa, so the current
turnaround time isn't that much of a problem outside the main archs)

/Sune





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Charles Plessy
On Wed, Nov 18, 2009 at 02:49:46PM +, Mark Brown wrote:
 
 The flip side of this is that it's just inviting maintainers to decide
 they can't be bothered with porting effort and leaving ports as second
 class citizens.

It seems that the trend this year is to not trust the maintainers for anything…

What about the porters' responsibility towards the project? For
instance, hppa is blocking the testing migration of a couple of my
packages, and probably the packages of many other maintainers as
well. Why would it be my duty to help people running Debian on
machines that are not used in my profession, and for which I have no
qualification at all? I do not want to prevent people having fun with
Debian on this arch, so wouldn't the best solution be to never build my
package on their arch in the first place? It would reduce the number of
issues to solve in both groups, Debian Med and the hppa porters, both of
which, like every other group in Debian, severely lack manpower.

Have a nice day,

-- 
Charles Plessy
Tsurumi, Kanagawa, Japan





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Manoj Srivastava
On Wed, Nov 18 2009, Charles Plessy wrote:

 On Wed, Nov 18, 2009 at 02:49:46PM +, Mark Brown wrote:
 
 The flip side of this is that it's just inviting maintainers to
 decide they can't be bothered with porting effort and leaving ports
 as second class citizens.

 It seems that the trend this year is to not trust the maintainers for
 anything…

It would seem that your remark below somewhat validates that trend.

 What about the porters' responsibility towards the project? For

They are also jointly responsible for trying to port stuff to
 their machines. We are, like, you know, in it together? Which is why
 the project is a plurality?

 instance, hppa is blocking the testing migration of a couple of my
 packages, and probably the packages of many other maintainers as
 well. Why would it be my duty to help people running Debian on
 machines that are not used in my profession, and for which I have no
 qualification at all? I do not want to prevent people having fun with

To try and make Debian better, rather than just be narrowly
 focused on your little fiefdom? 

The package maintainer is the resident expert Debian has for the
 package. If there are problems building it, the first line of
 defense is the package maintainer. I mean, dude, they are _your_
 packages that are not building on a supported architecture. If the
 problem is in the toolchain, the porters should take the lead, but that
 is the lower-probability scenario. Chances are the fix lies in your
 domain of expertise, namely, the package source.

 Debian on this arch, so wouldn't the best solution be to never build my
 package on their arch in the first place?

No. The best solution is to fix the buggy package. Or deem it
 too buggy to be in Debian, of course.


 It would reduce the number of issues to solve in both groups, Debian
 Med and the hppa porters, both of which, like every other group in
 Debian, severely lack manpower.

If some package so strains the resources of the teams, by
 being so fragile as to require a huge amount of effort on a couple of
 architectures, with no legitimate reason for being included in P-a-s,
 then the consideration should be to fix the package, or drop it, before
 relegating users of hppa to second-class citizens -- as long as the
 project still supports hppa.

manoj
-- 
Mind if I smoke? I don't care if you burst into flames and die!
Manoj Srivastava sriva...@debian.org http://www.debian.org/~srivasta/  
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Kumar Appaiah
(Note: I am not a porter, so please correct anything wrong I say
below)

On Thu, Nov 19, 2009 at 08:29:53AM +0900, Charles Plessy wrote:
 What about the porters' responsibility towards the project? For
 instance, hppa is blocking the testing migration of a couple of my
 packages, and probably the packages of many other maintainers as well.
 Why would it be my duty to help people running Debian on machines that
 are not used in my profession, and for which I have no qualification at
 all?

Because of Social Contract point 4: Our priorities are our users and
free software.

Believe it or not, I had never _imagined_ that some of the software in
Debian is actually used on non-x86ish hardware. However, people want to
do weird things on weird non-x86ish machines, and they like Debian
because it enables them to natively use the very same tools used by
ordinary PC users on, say, their embedded machines for low-power
solutions for various automated tasks. They love it.

 I do not want to prevent people having fun with Debian on this arch,
 so wouldn't the best solution be to never build my package on their
 arch in the first place? It would reduce the number of issues to
 solve in both groups, Debian Med and the hppa porters, both of which,
 like every other group in Debian, severely lack manpower.

Your complaint makes sense. But such policies are in place because
Debian wants to allow all its users to have all the goodness,
irrespective of computer type. While it may seem that your packages
are (unfairly) being blocked from migration due to one particular
architecture's lag, removing a package from that architecture would be
looked upon as a regression uniformly, irrespective of whether such
packages are used on that architecture or not. You never know on which
day someone will decide to try something fancy with your package on a
fancy architecture; we don't want to disappoint him/her by saying
"Hey, sorry, it was too painful for us to keep providing the package
on your architecture, so we just removed it" (thereby, in my opinion,
not fulfilling Social Contract point 4).

Of course, if the architecture fails the release qualification, then
it's a different matter.
Kumar
-- 
Why are there always boycotts?  Shouldn't there be girlcotts too?
-- argon on #Linux




Re: Bits from the FTPMaster meeting

2009-11-18 Thread Goswin von Brederlow
Andreas Tille andr...@an3as.eu writes:

 On Wed, Nov 18, 2009 at 07:41:51AM +0100, Luk Claes wrote:
 
 I think one would be surprised how many packages get used on 'exotic'
 architectures. Most users don't specifically search for a piece of
 software, they want to have some specific task done by using a specific
 package. Not providing the package will only mean that the user either
 uses another package or does not get the task done.

 Well, I do not think that you can do gene sequencing or number crunching
 on current mobile phones.  So there really are programs which are not
 needed on all architectures, and even if you find a binary package which
 claims to do the job it is just useless.  Even if I agree with your
 argument that each program at least theoretically should build on any
 architecture (if not, it is a bug), in some cases it looks foolish to
 provide binary packages just for the sake of it.  This is what Charles
 meant when he wrote: We should trust the maintainer if a specific
 program is not needed for a certain architecture.

And then someone comes along and builds a supercomputer cluster out of
game consoles.

With the way energy consumption is becoming important, I would not be
surprised to see an ARM supercomputer cluster next.

Regards,
Goswin





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Goswin von Brederlow
Felipe Sateler fsate...@gmail.com writes:

 Luk Claes wrote:

 You apparently fail to see that building the packages on mips uncovers
 bugs that would otherwise still be there, just taking longer to uncover
 on the 'mainstream' platforms.

 This is not generally true. There are classes of bugs that appear on
 different platforms _due to being different platforms_, not just because

Unless you are talking about hand-optimized code, the first 3 or 4
Debian archs will already have the bug. They might not all show it
directly, but it will be there. The remaining archs then just make it
more likely that a bug shows up early.

 they were latent bugs waiting to be discovered. I presume that packages
 that require as much efficiency as possible (as Charles is implying for
 his packages) are very likely to implement platform-specific
 hacks/optimizations to run faster. It can be considered bad design, ugly
 and whatnot, but it is irrelevant if nobody ever uses the other platforms.

 Regards,
 Felipe Sateler

My take on this is that the code should first be written in a high-level
form, e.g. generic C code that runs everywhere, and then only the parts
that profiling shows to be worth it should be optimized.

The generic C code serves 3 functions:

1) It is usually much easier to understand and verify.

2) It can be used to compare results against the optimized code.

3) It can be used on archs where optimized code is too much work or
out of your expertise.


So all that supporting all the different archs really costs is keeping
the generic C code current. And you should be using it to verify changes
in the optimized code on a continuing basis anyway, something that helps
keep the quality of the optimized code strong too.
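
As a toy illustration of point 2) in Python (the principle is the same
for generic C against hand-tuned assembly), the generic version serves
as an oracle for the optimized one:

  import random

  def dot_generic(xs, ys):
      # straightforward reference implementation
      return sum(x * y for x, y in zip(xs, ys))

  def dot_optimized(xs, ys):
      # stand-in for a hand-tuned variant, here unrolled by four
      total, n = 0, len(xs) - len(xs) % 4
      for i in range(0, n, 4):
          total += (xs[i] * ys[i] + xs[i + 1] * ys[i + 1]
                    + xs[i + 2] * ys[i + 2] + xs[i + 3] * ys[i + 3])
      for i in range(n, len(xs)):
          total += xs[i] * ys[i]
      return total

  # compare the two implementations on random inputs
  for _ in range(1000):
      xs = [random.randint(-99, 99) for _ in range(random.randint(0, 32))]
      ys = [random.randint(-99, 99) for _ in range(len(xs))]
      assert dot_generic(xs, ys) == dot_optimized(xs, ys)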

Regards,
Goswin





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Goswin von Brederlow
Sune Vuorela nos...@vuorela.dk writes:

 On 2009-11-18, Gerfried Fuchs rho...@deb.at wrote:
  I am a bit confused with respect to how buildd autosigning is required
 for this. It makes it sound somehow like it would affect porter binary

 Basically, the turnaround time is too long if we have to wait for manual
 buildd signings.

 For example, when we upload a new KDE, we usually upload a big chunk of
 source packages (3-5) where package 1 breaks the last package.

 Currently, we can upload all source packages built for amd64 and i386
 and that way keep kde installable in unstable for more than 95 % of the
 users. 

 With 1 package signing per day (which is quite normal), we have 5 days
 where kde by itself is uninstallable on all archs, if the buildds have
 to build all packages by current means.

 With buildd autosigning, we probably only have a day or so on the fast
 archs with kde being uninstallable.

 And I have the impression that we will get quite a few bug reports about
 kde being uninstallable. We already get those when kde is part of
 another transition, and if kde is blocking itself on the main archs, we
 will only get more.

 So yes, I really hope that 'source only' (or throw away binaries)
 uploads only get implemented when buildd autosigning is in place.

 (KDE doesn't have that many users on e.g. hppa, so the current
 turnaround time isn't that much of a problem outside the main archs)

 /Sune

An alternative way to solve this is to use the built packages on the
buildd without waiting for them to be signed and uploaded. This would
require some coordination with wanna-build so that later KDE packages
are only given to the buildd that has the earlier ones available.

The buildd would then build all of KDE and the buildd admin could sign
it all in one go. That way you have potentially zero uninstallable time.
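
A rough sketch of such an affinity rule, with invented buildd fields
(locally_built, queue); the real wanna-build protocol is more involved:

  def pick_buildd(source, buildds, transition_group):
      # keep the remaining packages of a transition on the buildd that
      # already holds the earlier, not-yet-signed builds of the group
      for b in buildds:
          if any(p in b.locally_built for p in transition_group):
              return b
      # no affinity yet: fall back to the least loaded buildd
      return min(buildds, key=lambda b: len(b.queue))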

Regards,
Goswin





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Carlo Segre

On Mon, 16 Nov 2009, Steffen Joeris wrote:


On Mon, 16 Nov 2009 02:04:28 pm Carlo Segre wrote:

On Sun, 15 Nov 2009, Joerg Jaspert wrote:

The current winning opinion is to go with the source+throw away
binaries route.  We are close to being able to achieve this, it is
simply that it has not yet been enabled.  Before any version of this
can be enabled, buildd autosigning needs to be implemented in order
that dak can differentiate buildd uploads vs maintainer uploads.


It may be necessary to also move the building of contrib packages to the
unofficial non-free buildd network.  As it stands any contrib package
which has a non-free Build-Depends is not guaranteed to build on all
architectures since not all the buildd systems include the non-free
archives.  Up to now it has been possible to do binary uploads to work
around this and get as many architectures in the archive as possible to
build manually.  When this new option is enabled, it will no longer be
possible.

As I understood it, it is still possible for DDs to do binary-only uploads (as
allowed per GR). This throwing away of the binary package is only for the
initial source+binary upload.
(In an ideal world, there should be no need for DDs to do binary-only uploads
by hand, but in reality it has to happen every now and then, at least for
security).


I suppose that is correct.  It still makes sense to me that the contrib
packages be built on the non-free autobuilders, for the practical reasons
I have mentioned above.


Cheers,

Carlo


--
Carlo U. Segre -- Professor of Physics
Associate Dean for Graduate Admissions, Graduate College
Illinois Institute of Technology
Voice: 312.567.3498Fax: 312.567.3494
se...@iit.edu   http://www.iit.edu/~segre   se...@debian.org





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Luk Claes
Goswin von Brederlow wrote:
 Sune Vuorela nos...@vuorela.dk writes:
 
 On 2009-11-18, Gerfried Fuchs rho...@deb.at wrote:
  I am a bit confused with respect to how buildd autosigning is required
 for this. It makes it sound somehow like it would affect porter binary
  Basically, the turnaround time is too long if we have to wait for
  manual buildd signings.

  For example, when we upload a new KDE, we usually upload a big chunk of
  source packages (3-5) where package 1 breaks the last package.

 Currently, we can upload all source packages built for amd64 and i386
 and that way keep kde installable in unstable for more than 95 % of the
 users. 

 With 1 package signing per day (which is quite normal), we have 5 days
 where kde by itself is uninstallable on all archs, if the buildds have
 to build all packages by current means.

 With buildd autosigning, we probably only have a day or so on the fast
 archs with kde being uninstallable.

  And I have the impression that we will get quite a few bug reports
  about kde being uninstallable. We already get those when kde is part of
  another transition, and if kde is blocking itself on the main archs, we
  will only get more.

 So yes, I really hope that 'source only' (or throw away binaries)
 uploads only get implemented when buildd autosigning is in place.

 (KDE doesn't have that many users on e.g. hppa, so the current
 turnaround time isn't that much of a problem outside the main archs)

 /Sune
 
 An alternative way to solve this is to use the built packages on the
 buildd without waiting for them to be signed and uploaded. This would
 require some coordination with wanna-build so that later KDE packages
 are only given to the buildd that has the earlier ones available.

This could only work if the built package is needed on the same buildd
it was built on.

 The buildd would then build all of KDE and the buildd admin could sign
 it all in one go. That way you have potentially 0 uninstallable time.

It's very unlikely that the builds for all these packages end up on the
same buildd, so in practice that would not work. It could be an
improvement, though.

Cheers

Luk





Re: Bits from the FTPMaster meeting

2009-11-18 Thread Mike Hommey
On Wed, Nov 18, 2009 at 11:16:41PM +, Sune Vuorela wrote:
 On 2009-11-18, Gerfried Fuchs rho...@deb.at wrote:
   I am a bit confused with respect to how buildd autosigning is required
  for this. It makes it sound somehow like it would affect porter binary
 
 Basically, the turnaround time is too long if we have to wait for manual
 buildd signings.
 
 For example, when we upload a new KDE, we usually upload a big chunk of
 source packages (3-5) where package 1 breaks the last package.
 
 Currently, we can upload all source packages built for amd64 and i386
 and that way keep kde installable in unstable for more than 95 % of the
 users. 
 
 With 1 package signing per day (which is quite normal), we have 5 days
 where kde by itself is uninstallable on all archs, if the buildds have
 to build all packages by current means.

Stupid question: if all these packages are interdependent and need to be
built the same day, why not upload them as a single source package?

It's even easier now, with the new source formats.

Mike





Re: Bits from the FTPMaster meeting

2009-11-17 Thread Raphael Hertzog
On Tue, 17 Nov 2009, Charles Plessy wrote:
 To save everybody's time, I proposed earlier in this month's discussion
 to not report the build failures in our bug tracking system unless there
 is an interest from the porters or from the package maintainers to make
 the package available in the affected architecture(s).

This goes against my interpretation of what it means to try to be a
universal operating system.

I know multiple architecture support can sometimes be a pain, but we can't
really afford to drop part of it without consequences. It will have
consequences in what migrates to testing and thus on what a stable release
looks like.

Cheers,
-- 
Raphaël Hertzog





Re: Bits from the FTPMaster meeting

2009-11-17 Thread Charles Plessy
On Tue, Nov 17, 2009 at 08:27:22AM +0100, Yves-Alexis Perez wrote:
 
 Unless your proposal is just for unstable and you don't want to change
 the policy for testing migration?

Hi,

Testing migration works the way it should: if a package is never built on
an architecture, testing migration is not prevented. The problem is that
for the sake of universality, some programs are built where nobody wants
them. Then when there is a build failure, nobody wants the ‘hot potato’.
Upstream does not support non-mainstream arches, the porters are busy
porting more central packages, and the package maintainer has user
requests to answer and knows that nobody will send him kudos for building
the package where it is not used.

Currently, if I put ‘Arch: i386 amd64’ in the debian/control file, it
creates difficulties for the people who want to build the package for
their own purposes on a different architecture, because dpkg-gencontrol
will fail. If instead it would only emit a warning like ‘Unsupported
architecture, use at your own risk.’, then the maintainer could use this
field to control the list of architectures he is willing to support.
Official buildds could then ignore the unsupported ones.
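
For illustration only, the proposed behaviour could look like this
Python sketch (dpkg-gencontrol itself is written in Perl, and all names
here are made up):

  import sys

  def check_host_arch(control_arches, host_arch, strict=True):
      # accept 'any' or an explicitly listed architecture
      if "any" in control_arches or host_arch in control_arches:
          return
      msg = ("%s not in Architecture field (%s); unsupported, "
             "use at your own risk" % (host_arch, " ".join(control_arches)))
      if strict:
          sys.exit("error: " + msg)               # current: hard failure
      print("warning: " + msg, file=sys.stderr)   # proposed: warn and go on

  # with strict=False this merely warns instead of aborting the build
  check_host_arch(["i386", "amd64"], "hppa", strict=False)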

I would be more than happy to have user feedback asking me to support
more architectures on a case-by-case basis. But my point is that for most
of my packages those users simply do not exist. On the other hand, as
Raphaël noted, building everything everywhere is not so easy, so my
conclusion is that for the universality of building we spend energy that
could be used to improve our universality of use.

If there is a general agreement that maintainers should be trusted and allowed
to restrict the set of build architectures on their packages, I can have a look
at the dpkg-gencontrol source code and propose a patch…

And to return to the topic of this thread, if we decrease the number of
packages built by default on some slow architectures, we take some
pressure off the build network, which makes it more fault-tolerant and
removes one reason given for disallowing source-only uploads.

Have a nice day,

-- 
Charles Plessy
Tsurumi, Kanagawa, Japan





Re: Bits from the FTPMaster meeting

2009-11-17 Thread Goswin von Brederlow
Charles Plessy ple...@debian.org writes:

 On Tue, Nov 17, 2009 at 08:27:22AM +0100, Yves-Alexis Perez wrote:
 
 Unless your proposal is just for unstable and you don't want to change
 the policy for testing migration?

 Hi,

 Testing migration works the way it should: if a package is never built
 on an architecture, testing migration is not prevented. The problem is
 that for the sake of universality, some programs are built where nobody
 wants them. Then when there is a build failure, nobody wants the ‘hot
 potato’. Upstream does not support non-mainstream arches, the porters
 are busy porting more central packages, and the package maintainer has
 user requests to answer and knows that nobody will send him kudos for
 building the package where it is not used.

 Currently, if I put ‘Arch: i386 amd64’ in the debian/control file, it
 creates difficulties for the people who want to build the package for
 their own purposes on a different architecture, because dpkg-gencontrol
 will fail. If instead it would only emit a warning like ‘Unsupported
 architecture, use at your own risk.’, then the maintainer could use this
 field to control the list of architectures he is willing to support.
 Official buildds could then ignore the unsupported ones.

Buildds don't care about the Arch field anyway. They use P-A-S and NFU
only.

And if your package did build before, then testing will have the old
package. I don't think dak handles the case where an architecture was
dropped without the old package being manually removed from testing.

So you really have not gained anything at all.

 I would be more than happy to have user feedback asking me to support
 more architectures on a case-by-case basis. But my point is that for
 most of my packages those users simply do not exist. On the other hand,
 as Raphaël noted, building everything everywhere is not so easy, so my
 conclusion is that for the universality of building we spend energy that
 could be used to improve our universality of use.

The problem is that when you do need the package as a user it will not
be there. You probably won't even know it should be there as apt-cache
search and such won't show it.

 If there is a general agreement that maintainers should be trusted and
 allowed to restrict the set of build architectures on their packages, I
 can have a look at the dpkg-gencontrol source code and propose a patch…

General agreement has always been to build everything so that if/when a
user needs it, it is available.

The only good reason to restrict the architectures is when the package
cannot possibly be built or maintained for that arch. And since you
are talking about cases where the previous uploads all did build and
work, that is clearly not the case.

 And to return to the topic of this thread, if we decrease the number of
 packages built by default on some slow architectures, we take some
 pressure off the build network, which makes it more fault-tolerant and
 removes one reason given for disallowing source-only uploads.

 Have a nice day,

The good reason for source-only uploads is to improve the quality of
packages for amd64/i386 because uploaders often do a lousy job,
i.e. too often they don't build in a clean and current sid chroot.

Regards,
Goswin





Re: Bits from the FTPMaster meeting

2009-11-17 Thread Luk Claes
Charles Plessy wrote:
 On Tue, Nov 17, 2009 at 08:27:22AM +0100, Yves-Alexis Perez wrote:
 Unless your proposal is just for unstable and you don't want to change
 the policy for testing migration?
 
 Hi,
 
 Testing migration works the way it should: if a package is never built
 on an architecture, testing migration is not prevented. The problem is
 that for the sake of universality, some programs are built where nobody
 wants them. Then when there is a build failure, nobody wants the ‘hot
 potato’. Upstream does not support non-mainstream arches, the porters
 are busy porting more central packages, and the package maintainer has
 user requests to answer and knows that nobody will send him kudos for
 building the package where it is not used.

The reason we want everything to be built everywhere if possible is not
universality, but quality.

If your package FTBFS on some architecture, then that is a bug. A bug
that was already there, it just had not been noticed yet. In most cases
the bug is rather easy to fix, even for non-porters, as most of the
architecture-specific FTBFS issues are due to wrong assumptions like
32-bit/64-bit, little endian/big endian...

 Currently, if I put ‘Arch: i386 amd64’ in the debian/control file, it
 creates difficulties for the people who want to build the package for
 their own purposes on a different architecture, because dpkg-gencontrol
 will fail. If instead it would only emit a warning like ‘Unsupported
 architecture, use at your own risk.’, then the maintainer could use this
 field to control the list of architectures he is willing to support.
 Official buildds could then ignore the unsupported ones.

In the general case, supporting all release architectures should not be
much harder than supporting only a small subset of them. Regarding the
buildds, they normally try to build everything that has more than just
Architecture: all packages...

 I would be more than happy to have user feedback asking me to support
 more architectures on a case-by-case basis. But my point is that for
 most of my packages those users simply do not exist. On the other hand,
 as Raphaël noted, building everything everywhere is not so easy, so my
 conclusion is that for the universality of building we spend energy that
 could be used to improve our universality of use.

I think one would be surprised how many packages get used on 'exotic'
architectures. Most users don't specifically search for a piece of
software, they want to have some specific task done by using a specific
package. Not providing the package will only mean that the user either
uses another package or does not get the task done. It certainly would
not mean that a user knows before a release that they will use a
specific package nor that they would actually give feedback beforehand.

 If there is a general agreement that maintainers should be trusted and
 allowed to restrict the set of build architectures on their packages, I
 can have a look at the dpkg-gencontrol source code and propose a patch…

If there is no good technical reason, it will get built everywhere and
by extension supported everywhere.

 And to return to the topic of this thread, if we decrease the number of
 packages built by default on some slow architectures, we take some
 pressure off the build network, which makes it more fault-tolerant and
 removes one reason given for disallowing source-only uploads.

Slow architectures are dying, otherwise faster chipsets would be getting
built for them, IMHO. So I want to look at it the other way around: fast
architectures are more fault-tolerant than slow ones, so if the slow ones
catch too many faults, they are out for the release.

I don't think it's good to waste buildd time on failing to build packages.
I also don't think anyone is stopped from setting up a service that
allows source-only uploads as a go-between.

Cheers

Luk





Re: Bits from the FTPMaster meeting

2009-11-17 Thread Manoj Srivastava
On Tue, Nov 17 2009, Charles Plessy wrote:

 On Tue, Nov 17, 2009 at 08:27:22AM +0100, Yves-Alexis Perez wrote:
 
 Unless your proposal is just for unstable and you don't want to change
 the policy for testing migration?

 Hi,

 Testing migration works the way it should: if a package is never built
 on an architecture, testing migration is not prevented. The problem is
 that for the sake of universality, some programs are built where nobody
 wants them. Then when there is a build failure, nobody wants the ‘hot
 potato’. Upstream does not support non-mainstream arches, the porters
 are busy porting more central packages, and the package maintainer has
 user requests to answer and knows that nobody will send him kudos for
 building the package where it is not used.

I beg to differ. This sounds like a maintainer who is not
 providing support for their package, and needs to orphan that
 package; not building on some architecture is often a symptom of
 problems elsewhere as well. I am not sure we ought to support
 maintainers who are neglectful of their packages.

manoj

-- 
There are some things worth dying for. Kirk, Errand of Mercy, stardate
3201.7
Manoj Srivastava sriva...@debian.org http://www.debian.org/~srivasta/  
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C





Re: Bits from the FTPMaster meeting

2009-11-16 Thread Simon Huggins
On Sun, Nov 15, 2009 at 04:15:35PM +0100, Joerg Jaspert wrote:
 source-only uploads
 ---
 After some discussion about this, there are two opinions within the
 ftp-team about this matter.  Given that other distros' experience has
 shown that allowing source only uploads results in a huge loss of
 quality checks and an increased load on the buildds from packages
 FTBFSing everywhere, some members of the team believe that source+binary
 uploads should happen as currently, but that the maintainer-built
 binaries should be rebuilt by the buildds (i.e. be thrown away at accept
 time).  Other members of the team think that we should allow source-only
 uploads and that if some people keep uploading packages which FTBFS
 everywhere (showing a lack of basic testing), this should be dealt with
 by the project in other ways which are out of the scope of the ftp-team.

What's the difference between these options?

If you throw away the binaries, a DD can presumably upload a binary
package with a sole binary that prints out "banana", and a source package
that builds the right thing. Are there any checks to prevent that?

I'm trying to work out whether building-but-throwing-away really gets
you what you think it does, making it better than entirely source-only
uploads.
Simon.

-- 
oOoOo   If you need to find a good pub in London follow peopleoOoOo
 oOoOo   wearing Debian shirts. It works, it really does.  --oOoOo
  oOoOo Alan Cox oOoOo
  htag.pl 0.0.24 ::: http://www.earth.li/~huggie/




Re: Bits from the FTPMaster meeting

2009-11-16 Thread Philipp Kern
On 2009-11-16, Simon Huggins hug...@earth.li wrote:
 If you throw away the binaries, a DD can presumably upload a binary
 package with a sole binary that prints out "banana", and a source
 package that builds the right thing. Are there any checks to prevent
 that?

 I'm trying to work out whether building-but-throwing-away really gets
 you what you think it does, making it better than entirely source-only
 uploads.

You can run lintian on the resulting binaries, which you can't on
source-only uploads.  (Well, you can only check the source package.)
Now, if that stub binary you upload is free of the errors ftp-masters
reject upon, then you can still work around that.

And I didn't bother to check now whether they really run binary checks
yet; however, I'd at least assume something like binary-package-is-empty.
;-)
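
A coarse sketch of such a gate in Python, relying only on lintian's
non-zero exit status when it reports errors (dak's actual autoreject
logic is tag-based and more refined):

  import subprocess

  def lintian_gate(changes_path):
      # run lintian over the buildd-produced .changes file; a non-zero
      # exit status signals reported problems, treated here as a reject
      result = subprocess.run(["lintian", changes_path],
                              capture_output=True, text=True)
      if result.returncode != 0:
          print(result.stdout)  # tags to quote in the rejection mail
          return False
      return True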

Kind regards,
Philipp Kern





Re: Bits from the FTPMaster meeting

2009-11-16 Thread Goswin von Brederlow
Philipp Kern tr...@philkern.de writes:

 On 2009-11-16, Simon Huggins hug...@earth.li wrote:
 If you throw away the binaries, a DD can presumably upload a binary
 package with a sole binary that prints out "banana", and a source
 package that builds the right thing. Are there any checks to prevent
 that?

 I'm trying to work out whether building-but-throwing-away really gets
 you what you think it does, making it better than entirely source-only
 uploads.

 You can run lintian on the resulting binaries, which you can't on
 source-only uploads.  (Well, you can only check the source package.)
 Now, if that stub binary you upload is free of the errors ftp-masters
 reject upon, then you can still work around that.

 And I didn't bother to check now whether they really run binary checks
 yet; however, I'd at least assume something like binary-package-is-empty.
 ;-)

 Kind regards,
 Philipp Kern

Those could (and should) easily be checked for the binary-only uploads
from the buildds. And if a maintainer keeps uploading sources that fail
the lintian checks on the buildd uploads, that could be dealt with by
whatever other method the initial mail hinted at.

In my mind the question is: will maintainers upload so many bad source
packages that the overhead of uploading binaries and throwing them
away makes sense? Something that cannot be answered without some hard
data.

Regards,
Goswin





Re: Bits from the FTPMaster meeting

2009-11-16 Thread Steve Langasek
On Mon, Nov 16, 2009 at 08:38:15AM +0100, Goswin von Brederlow wrote:
  I'm not asserting that this problem is *not* significant, I simply don't
  know - and am interested in knowing if anyone has more data on this beyond
  some four-year-old anecdotes.  Certainly, Debian with its wider range of
  ports is more likely to run into problems because of this than Ubuntu, and
  so will need to be fairly cautious.

 I don't think the number of ports will have any meaning here. If the
 package is too broken to build/work on the maintainers architecture it
 will most likely be broken on all archs. On the other hand if it works
 on the maintainers architecture then testing or no testing makes no
 difference to the other ports.

 It seems to me the only port that MIGHT suffer quality issues is the
 one the maintainer uses, meaning i386 or amd64 usually, and Ubuntu
 already has experience there.

On Mon, Nov 16, 2009 at 06:24:42PM +1100, Robert Collins wrote:
 On Sun, 2009-11-15 at 19:29 -0600, Steve Langasek wrote:

  I'm not asserting that this problem is *not* significant, I simply don't
  know - and am interested in knowing if anyone has more data on this beyond
  some four-year-old anecdotes.  Certainly, Debian with its wider range of
  ports is more likely to run into problems because of this than Ubuntu, and
  so will need to be fairly cautious.

 I'd have assumed that ports will have no effect on this: Debian only
 uploads one binary arch (from the maintainer) anyway :- only builds on
 that arch will be directly affected except in the case of a build
 failure that the maintainer could have caught locally.

I thought the nature of the problem was clear, but to be explicit:
requiring binary uploads ensures that the package has been build-tested
*somewhere* prior to upload, and avoids clogging up the buildds with
preventable failures (some of which will happen only at the end of the
build, which may tie up the buildd for quite a long time).  The larger
number of ports compared to Ubuntu has the effect that the ports with the
lowest capacity are /more likely/ to run into problems as a result of such
waste, and as Debian only advances as fast as the slowest supported port,
this holds up the entire distribution.

-- 
Steve Langasek   Give me a lever long enough and a Free OS
Debian Developer   to set it on, and I can move the world.
Ubuntu Developerhttp://www.debian.org/
slanga...@ubuntu.com vor...@debian.org




Re: Bits from the FTPMaster meeting

2009-11-16 Thread Simon Richter
Hi,

On Mon, Nov 16, 2009 at 09:38:38AM -0600, Steve Langasek wrote:

 requiring binary uploads ensures that the package has been build-tested
 *somewhere* prior to upload, and avoids clogging up the buildds with
 preventable failures (some of which will happen only at the end of the
 build, which may tie up the buildd for quite a long time).

Sorting the build queue by number of architectures that have already
built the package would avoid this problem nicely.
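
A sketch of that ordering in Python, assuming a per-source record with
made-up fields (archs_built_ok, uploaded_at):

  def sort_needs_build(queue):
      # sources already proven to build on several architectures go
      # first; among equals, the oldest upload is attempted first
      return sorted(queue, key=lambda src: (-src.archs_built_ok,
                                            src.uploaded_at))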

   Simon





Re: Bits from the FTPMaster meeting

2009-11-16 Thread Luk Claes
Simon Richter wrote:
 Hi,
 
 On Mon, Nov 16, 2009 at 09:38:38AM -0600, Steve Langasek wrote:
 
 requiring binary uploads ensures that the package has been build-tested
 *somewhere* prior to upload, and avoids clogging up the buildds with
 preventable failures (some of which will happen only at the end of the
 build, which may tie up the buildd for quite a long time).
 
 Sorting the build queue by number of architectures that have already
 built the package would avoid this problem nicely.

It would not avoid the problem, but it would make it smaller.

I guess patches against wanna-build are welcomed.

Cheers

Luk





Re: Bits from the FTPMaster meeting

2009-11-16 Thread Luk Claes
Goswin von Brederlow wrote:
 Philipp Kern tr...@philkern.de writes:
 
 On 2009-11-16, Simon Huggins hug...@earth.li wrote:
 If you throw away the binaries, a DD can presumably upload a binary
 package with a sole binary that prints out "banana", and a source
 package that builds the right thing. Are there any checks to prevent
 that?

 I'm trying to work out whether building-but-throwing-away really gets
 you what you think it does, making it better than entirely source-only
 uploads.
 You can run lintian on the resulting binaries, which you can't on
 source-only uploads.  (Well, you can only check the source package.)
 Now, if that stub binary you upload is free of the errors ftp-masters
 reject upon, then you can still work around that.

 And I didn't bother to check now whether they really run binary checks
 yet; however, I'd at least assume something like binary-package-is-empty.
 ;-)

 Kind regards,
 Philipp Kern
 
 Those could (and should) easily be checked for the binary-only uploads
 from the buildds. And if a maintainer keeps uploading sources that fail
 the lintian checks on the buildd uploads, that could be dealt with by
 whatever other method the initial mail hinted at.
 
 In my mind the question is: will maintainers upload so many bad source
 packages that the overhead of uploading binaries and throwing them away
 makes sense? Something that cannot be answered without some hard data.

No one is stopping anyone from preparing a service that would accept
source-only uploads as a go-between, to find out at least some numbers
and to solve the problems some are having with the bandwidth or
unreliability of the existing solutions.

Cheers

Luk





Re: Bits from the FTPMaster meeting

2009-11-16 Thread Kevin Mark
On Sun, Nov 15, 2009 at 07:44:18PM +0100, Sandro Tosi wrote:
snip
 
 While I like the source + throw away solution, I'd also like to ask
 you to please consider some methods to allow the throw-away step on
 the developer's machine, for example having dput/dupload not upload the
 .debs (so .changes still has them listed, but they are not actually
 uploaded) and still accepting the upload.
 
 There are (still) people with slow internet connections or with very
 huge packages, with several binary packages, that would benefit a lot
 with this option.
 
 Additionally, things like NMUs or QA uploads (so where the tarball is
 not, generally, changed), would reduce to a .dsc + .diff.gz +
 .changes file set, that's a lot faster to upload.
 
 Thanks for considering,

I assume there is some check done on ftp-master involving the binary for
some reason. Would it be possible to do this check on the developer's
machine and simply send an 'it passed the check' message instead, thus
saving the bandwidth?
-- 
|  .''`.  == Debian GNU/Linux == | http://kevix.myopenid.com  |
| : :' : The Universal OS| mysite.verizon.net/kevin.mark/ |
| `. `'   http://www.debian.org/ | http://counter.li.org [#238656]|
|___`-Unless I ask to be CCd, assume I am subscribed _|





Re: Bits from the FTPMaster meeting

2009-11-16 Thread Sandro Tosi
On Tue, Nov 17, 2009 at 00:36, Kevin Mark kevin.m...@verizon.net wrote:
 On Sun, Nov 15, 2009 at 07:44:18PM +0100, Sandro Tosi wrote:
 snip

 While I like the source + throw away solution, I'd also like to ask
 you to please consider some methods to allow the throw-away step on
 the developer's machine, for example having dput/dupload not upload the
 .debs (so .changes still has them listed, but they are not actually
 uploaded) and still accepting the upload.

 There are (still) people with slow internet connections or with very
 huge packages, with several binary packages, that would benefit a lot
 with this option.

 Additionally, things like NMUs or QA uploads (so where the tarball is
 not, generally, changed), would reduce to a .dsc + .diff.gz +
 .changes file set, that's a lot faster to upload.

 Thanks for considering,

 I assume there is some check done on ftp-master involving the binary
 for some reason. Would it be possible to do this check on the
 developer's machine and simply send an 'it passed the check' message
 instead, thus saving the bandwidth?

Or run them on the buildd-generated packages (ah, btw, are the
lintian autoreject checks run on buildd binaries?). Since we are not
enabling source-only uploads because there's no trust in developers to
do proper uploads, we can't trust that information not to be forged to
fake a proper upload. Also, on a developer's machine there can be a
different lintian version than the one on the ftp queue machine, thus
generating misalignments etc.

Regards,
-- 
Sandro Tosi (aka morph, morpheus, matrixhasu)
My website: http://matrixhasu.altervista.org/
Me at Debian: http://wiki.debian.org/SandroTosi





Re: Bits from the FTPMaster meeting

2009-11-16 Thread Robert Collins
On Mon, 2009-11-16 at 09:38 -0600, Steve Langasek wrote:


 I thought the nature of the problem was clear, but to be explicit:
 requiring binary uploads ensures that the package has been build-tested
 *somewhere* prior to upload, and avoids clogging up the buildds with
 preventable failures (some of which will happen only at the end of the
 build, which may tie up the buildd for quite a long time).  The larger
 number of ports compared to Ubuntu has the effect that the ports with the
 lowest capacity are /more likely/ to run into problems as a result of such
 waste, and as Debian only advances as fast as the slowest supported port,
 this holds up the entire distribution.

Well, I was assuming a couple of things I guess.

Firstly, that if there are two successive uploads of a package, P and
P', where P is badly damaged, and P' is uploaded before an overloaded
architecture starts to build P, then P is never attempted on that
overloaded architecture.

Secondly, that we can also cut all the builds for a package which fails
on its first architecture.
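
Both pruning rules could look roughly like this Python sketch (the queue
model is invented for illustration):

  def prune_queue(queue):
      kept = []
      for job in queue:
          if job.superseded_by_newer_upload:
              continue  # P' arrived before P was attempted: skip P
          if job.failed_on_first_arch:
              continue  # already FTBFS on its first architecture
          kept.append(job)
      return kept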

-Rob




Re: Bits from the FTPMaster meeting

2009-11-16 Thread Goswin von Brederlow
Steve Langasek vor...@debian.org writes:

 On Mon, Nov 16, 2009 at 08:38:15AM +0100, Goswin von Brederlow wrote:
  I'm not asserting that this problem is *not* significant, I simply don't
  know - and am interested in knowing if anyone has more data on this beyond
  some four-year-old anecdotes.  Certainly, Debian with its wider range of
  ports is more likely to run into problems because of this than Ubuntu, and
  so will need to be fairly cautious.

 I don't think the number of ports will have any meaning here. If the
 package is too broken to build/work on the maintainers architecture it
 will most likely be broken on all archs. On the other hand if it works
 on the maintainers architecture then testing or no testing makes no
 difference to the other ports.

 It seems to me the only port that MIGHT suffer quality issues is the
 one the maintainer uses, meaning i386 or amd64 usually, and Ubuntu
 already has experience there.

 On Mon, Nov 16, 2009 at 06:24:42PM +1100, Robert Collins wrote:
 On Sun, 2009-11-15 at 19:29 -0600, Steve Langasek wrote:

  I'm not asserting that this problem is *not* significant, I simply don't
  know - and am interested in knowing if anyone has more data on this beyond
  some four-year-old anecdotes.  Certainly, Debian with its wider range of
  ports is more likely to run into problems because of this than Ubuntu, and
  so will need to be fairly cautious.

 I'd have assumed that ports will have no effect on this: Debian only
 uploads one binary arch (from the maintainer) anyway :- only builds on
 that arch will be directly affected except in the case of a build
 failure that the maintainer could have caught locally.

 I thought the nature of the problem was clear, but to be explicit:
 requiring binary uploads ensures that the package has been build-tested
 *somewhere* prior to upload, and avoids clogging up the buildds with
 preventable failures (some of which will happen only at the end of the
 build, which may tie up the buildd for quite a long time).  The larger
 number of ports compared to Ubuntu has the effect that the ports with the
 lowest capacity are /more likely/ to run into problems as a result of such
 waste, and as Debian only advances as fast as the slowest supported port,
 this holds up the entire distribution.

Which assumes the slower ports are neither idle nor backlogged, but
have just the right amount of load that they will actually build the
buggy source before the maintainer uploads the next version.

The only thing that shows is that the current static build order of
packages (A is always built before B, no matter how new A and how old B
is) is the real problem here. Factor in the time a source was uploaded
and there won't be starvation of sources by buggy sources.

And if buggy sources are still a problem after that, implement a karma
system. Every time a source fails, the package gets a malus; every
time it succeeds, it gets a bonus; and you factor that into the
priority. That way careless maintainers will get their packages built
less readily and will have to wait for idle times.
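
Such a priority could be sketched like this in Python, with made-up
weights and fields (uploaded_at, successes, failures):

  import time

  def build_priority(src, now=None):
      # older uploads rise; past successes add a bonus, failures a malus
      now = time.time() if now is None else now
      age_hours = (now - src.uploaded_at) / 3600.0
      karma = src.successes - src.failures
      return age_hours + 12.0 * karma  # 12h bonus/malus per past result

  def next_to_build(queue):
      return max(queue, key=build_priority)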

If ports still have problems with buggy sources after that, make
sources wait for some fast architecture to build them successfully
first before trying them. Block sources that already failed on
i386/amd64 completely. Or are you telling me that amd64 is too slow to
finish building before a backlogged arm buildd even tries?

All of that assumes this even is a problem in the first place, but
let's say it is. I don't think that requiring binary uploads reliably
ensures that sources will build. Experience has shown some really
broken uploads and tons of fluke breakages anyway. Those maintainers
that do care will still test their packages dutifully. Those that
already don't will keep uploading packages built against e.g. stable
or experimental even more. After all, the debs are thrown away. Why
bother rebuilding a source in a clean chroot if it did build on the
normal system? Or build debs with -nc during development and source-only
for the release and merge the changes files. Or put some dummy deb into
the changes file to trick dak. And so on. There are so many ways a lazy
maintainer can get around the check that it might just end up only
hurting the good guys.


Well, what I'm saying is that I'm not convinced this measure will have
any big effect either way. The good guys will still do good uploads,
the bad guys will still manage to do bad ones, and unavoidable screwups
will still happen. Let's concentrate on the important point: i386/amd64
will get clean binary packages built on a clean buildd. I think that
will improve quality much more than anything else.

Regards,
Goswin





Re: Bits from the FTPMaster meeting

2009-11-16 Thread Charles Plessy
On Mon, Nov 16, 2009 at 09:38:38AM -0600, Steve Langasek wrote:

 Debian only advances as fast as the slowest supported port

That is the key observation.

To save everybody's time, I proposed earlier in this month's discussion not
to report build failures in our bug tracking system unless the porters or
the package maintainers have an interest in making the package available
on the affected architecture(s).

We could go even one step further: if our build network is close to saturation
for the slowest architectures, it would make much sense to avoid building leaf
packages, or even whole branches of our dependency tree, on architectures
where they have no users. ‘No user’ is easy to define. If:

 - the package maintainer does not expect users,
 - the porters are busy porting core packages of higher priorities
   (required, important, standard …),
 - upstream does not support the architecture,
 - no Debian user ever showed interest in the package on this architecture,

then the package can safely be ignored, which will save everybody a lot of
time, letting them focus on problems that people do care a lot about.

This would reduce the load on the build network, and make occasional build
failures sustainable (not to mention that some of the packages that take a
long time to build are good candidates for being ignored).

Although it sounds a bit syllogistic, if for some architectures we do not build
the packages that have no users, no user will complain. So why not?

Have a nice day,

-- 
Charles Plessy
Tsurumi, Kanagawa, Japan





Re: Bits from the FTPMaster meeting

2009-11-16 Thread Yves-Alexis Perez
On Tue, 2009-11-17 at 14:07 +0900, Charles Plessy wrote:
 Although it sounds a bit syllogistic, if for some architectures we do not build
 the packages that have no users, no user will complain. So why not?

Well, I'm not really sure we can expect our users to follow unstable and
each and every FTBFS on their arch. I think quite a lot of people just
happily use stable releases, and have little to no interaction with the
build system and the BTS (because, you know, things mostly do work fine,
so you don't have anything to report). The day you want to upgrade to the
next version, you read the release notes and see a huge list of packages
which aren't available on one arch or another, but by then it's too
late.

Unless your proposal is just for unstable and doesn't change the
policy for testing migration?

Cheers,
-- 
Yves-Alexis




Re: Bits from the FTPMaster meeting

2009-11-15 Thread Frans Pop
On Sunday 15 November 2009, Joerg Jaspert wrote:
 dpkg v3 source format, compression
 --
 As many already noticed, our archive now additionally supports 3.0
 (quilt) and 3.0 (native) source package formats.  You can use either
 gzip as usual or bzip2 for the compression within the binary packages -
 and now also for the source files. We do not support lzma as a
 compressor, as the format is already dead again. After squeeze we will
 probably add support for its successor, xz.

Is there a policy for the use of bzip2?

As discussed earlier, bzip2 is *much* slower than gzip and really hurts on 
slower arches and systems, so I'd suggest that - especially for binary 
packages - gzip should remain the default for all normal cases and bzip2 
should be reserved for cases where there is a really significant size 
decrease.

 source-only uploads
 ---
 The current winning opinion is to go with the source+throw away
 binaries route.  We are close to being able to achieve this, it is
 simply that it has not yet been enabled.

I fully agree with that, but I'd like to request that exceptions be allowed
in special cases.

The main use case I have is kernel udebs, where it is sometimes necessary to 
upload udebs to unstable that are built from a kernel version in testing. Our 
own build methods support that, but it would get undone by a rebuild.

Our build method also ensures that all uploads are based on the same kernel 
version, something that's much harder to guarantee when it's left to the 
buildds.

 The extra source case
 -
 This issue is the one traditionally known as the linux-modules-extra
 problem but also exists for some compiler packages and in the past
 existed for things such as apache2-mpm-itk and so is a more general
 problem.  It exists where a package needs to use source from another
 package in order to build.

And kernel udebs.

 We intend to fix this by introducing a way of packages declaring that
 they were Built-Using a certain source package,version and then tracking
 that to ensure that the sources are kept around properly.

Nice.

Cheers,
FJP




Re: Bits from the FTPMaster meeting

2009-11-15 Thread Andreas Metzler
Frans Pop elen...@planet.nl wrote:
 On Sunday 15 November 2009, Joerg Jaspert wrote:
 dpkg v3 source format, compression
[...]
 Is there a policy for the use of bzip2?

 As discussed earlier, bzip2 is *much* slower than gzip and really hurts on 
 slower arches and systems, so I'd suggest that - especially for binary 
 packages - gzip should remain the default for all normal cases and bzip2 
 should be reserved for cases where there is a really significant size 
 decrease.

... or where upstream only offers tar.bz2.

FWIW dpkg does the smart thing by default. It uses gzip (both
for the debian packages and debian.tar) but searches for both
foo_42.orig.tar.bz2 and .gz. Explicitly passing an option is required
to get bz2 compression for binary packages and/or debian.tar.
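
For example (from memory, so check dpkg-source(1) and dpkg-deb(1) for
the exact spelling in your version):

# defaults: gzip for the binary packages and for debian.tar
dpkg-source -b foo-1.0
# explicitly request bzip2 when building the source package
dpkg-source -Zbzip2 -b foo-1.0
# explicitly request bzip2 for a binary package's data.tar
dpkg-deb -Zbzip2 -b foo-1.0/debian/foo ../foo_1.0_amd64.deb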

cu and- happily using v3 for gnutls -reas





Re: Bits from the FTPMaster meeting

2009-11-15 Thread Sandro Tosi
Hello Joerg,
thanks for the updates.

On Sun, Nov 15, 2009 at 16:15, Joerg Jaspert jo...@ganneff.de wrote:
...
 source-only uploads
 ---
 After some discussion about this, there are two opinions within the
 ftp-team about this matter.  Given that other distros experience has
 shown that allowing source only uploads results in a huge loss of
 quality checks and an increased load on the buildds from packages
 FTBFSing everywhere, some members of the team believe that source+binary
 uploads should happen as currently,  but that the maintainer built
 binaries should be rebuilt by the buildds (i.e. be thrown away at accept
 time).  Other members of the team think that we should allow source-only
 uploads and that if some people keep uploading packages which FTBFS
 everywhere (showing a lack of basic testing), this should be dealt with
 by the project in other ways which are out of the scope of the ftp-team.

 The current winning opinion is to go with the source+throw away
 binaries route.  We are close to being able to achieve this, it is
 simply that it has not yet been enabled.  Before any version of this
 can be enabled, buildd autosigning needs to be implemented in order
 that dak can differentiate buildd uploads vs maintainer uploads.

 Provisions have been made in dak for things such as bootstrapping a
 new architecture where binary uploads from porters may be necessary
 in order to get going.

While I like the source + throw away solution, I'd also like to ask
you to please consider some method to allow the throw-away step on
the developer machine, for example having dput/dupload not upload the
.debs (so .changes still lists them, but they are not actually
uploaded) while still accepting the upload.

There are (still) people with slow internet connections, or with very
large packages building several binary packages, who would benefit a lot
from this option.

Additionally, things like NMUs or QA uploads (where the tarball is
generally not changed) would reduce to a .dsc + .diff.gz +
.changes file set, which is a lot faster to upload.

Thanks for considering,
-- 
Sandro Tosi (aka morph, morpheus, matrixhasu)
My website: http://matrixhasu.altervista.org/
Me at Debian: http://wiki.debian.org/SandroTosi





Re: Bits from the FTPMaster meeting

2009-11-15 Thread Joey Hess
Andreas Metzler wrote:
 FWIW dpkg does the smart thing by default. It uses gzip (both
 for the debian packages and debian.tar) but searches for both
 foo_42.orig.tar.bz2 and .gz. Explicitly passing an option is required
 to get bz2 compression for binary packages and/or debian.tar.

Note that debootstrap does not support data.tar.bz2.

 cu and- happily using v3 for gnutls -reas

Please avoid doing so for libtasn1-3.

-- 
see shy jo




Re: Bits from the FTPMaster meeting

2009-11-15 Thread Goswin von Brederlow
Sandro Tosi mo...@debian.org writes:

 Hello Joerg,
 thanks for the updates.

 On Sun, Nov 15, 2009 at 16:15, Joerg Jaspert jo...@ganneff.de wrote:
 ...
 source-only uploads
 ---
 After some discussion about this, there are two opinions within the
 ftp-team about this matter.  Given that other distros experience has
 shown that allowing source only uploads results in a huge loss of
 quality checks and an increased load on the buildds from packages
 FTBFSing everywhere, some members of the team believe that source+binary
 uploads should happen as currently,  but that the maintainer built
 binaries should be rebuilt by the buildds (i.e. be thrown away at accept
 time).  Other members of the team think that we should allow source-only
 uploads and that if some people keep uploading packages which FTBFS
 everywhere (showing a lack of basic testing), this should be dealt with
 by the project in other ways which are out of the scope of the ftp-team.

 The current winning opinion is to go with the source+throw away
 binaries route.  We are close to being able to achieve this, it is
 simply that it has not yet been enabled.  Before any version of this
 can be enabled, buildd autosigning needs to be implemented in order
 that dak can differentiate buildd uploads vs maintainer uploads.

 Provisions have been made in dak for things such as bootstrapping a
 new architecture where binary uploads from porters may be necessary
 in order to get going.

 While I like the source + throw away solution, I'd also like to ask
 you to please consider some method to allow the throw-away step on
 the developer machine, for example having dput/dupload not upload the
 .debs (so .changes still lists them, but they are not actually
 uploaded) while still accepting the upload.

 There are (still) people with slow internet connections, or with very
 large packages building several binary packages, who would benefit a lot
 from this option.

 Additionally, things like NMUs or QA uploads (where the tarball is
 generally not changed) would reduce to a .dsc + .diff.gz +
 .changes file set, which is a lot faster to upload.

 Thanks for considering,

What about Architecture: all packages? Will they be kept? Is there one
special buildd that builds them?

If Architecture: all is kept then maybe allow source+all uploads?

MfG
Goswin





Re: Bits from the FTPMaster meeting

2009-11-15 Thread Philipp Kern
On 2009-11-15, Goswin von Brederlow goswin-...@web.de wrote:
 If Architecture: all is kept then maybe allow source+all uploads?

Those are already possible.  Whether they're allowed is another question, though.

Kind regards,
Philipp Kern





Re: Bits from the FTPMaster meeting

2009-11-15 Thread Mark Hymers
On Sun, 15, Nov, 2009 at 02:37:56PM -0500, Joey Hess spoke thus..
 Andreas Metzler wrote:
  FWIW dpkg does the smart thing by default. It uses gzip (both
  for the debian packages and debian.tar) but searches for both
  foo_42.orig.tar.bz2 and .gz. Explicitly passing an option is required
  to get bz2 compression for binary packages and/or debian.tar.
 
 Note that debootstrap does not support data.tar.bz2.
 
  cu and- happily using v3 for gnutls -reas
 
 Please avoid doing so for libtasn1-3.

I think there's some confusion here between source and binary formats.
The announcement was referring to bzip2 when used as part of a source
upload.  As far as I can tell from looking in the git logs, dak has
supported data.tar.bz2 since 2005, so I'm surprised that this has never
been an issue before if debootstrap can't handle it.

Mark

-- 
Mark Hymers mhy at debian dot org

Everyone is entitled to be stupid but some abuse the privilege.
 Unknown





Re: Bits from the FTPMaster meeting

2009-11-15 Thread Joey Hess
Joey Hess wrote:
  cu and- happily using v3 for gnutls -reas
 
 Please avoid doing so for libtasn1-3.

Please ignore above; misread.

-- 
see shy jo




Re: Bits from the FTPMaster meeting

2009-11-15 Thread Frans Pop
Mark Hymers wrote:
 I think there's some confusion here between source and binary formats.
 The announcement was referring to bzip2 when used as part of a source
 upload.  As far as I can tell from looking in the git logs, dak has
 supported data.tar.bz2 since 2005, so I'm surprised that this has never
 been an issue before if debootstrap can't handle it.

Then can you (or someone else) please explain what exactly is meant by the 
reference to bzip2 for binary packages in the following quote from the 
original mail:

! You can use either gzip as usual or bzip2 for the compression within
! the binary packages - and now also for the source files.

TIA,
FJP





Re: Bits from the FTPMaster meeting

2009-11-15 Thread Stephen Gran
This one time, at band camp, Frans Pop said:
 Mark Hymers wrote:
  I think there's some confusion here between source and binary formats.
  The announcement was referring to bzip2 when used as part of a source
  upload.  As far as I can tell from looking in the git logs, dak has
  supported data.tar.bz2 since 2005, so I'm surprised that this has never
  been an issue before if debootstrap can't handle it.
 
 Then can you (or someone else) please explain what exactly is meant by the 
 reference to bzip2 for binary packages in the following quote from the 
 original mail:
 
 ! You can use either gzip as usual or bzip2 for the compression within
 ! the binary packages - and now also for the source files.

I suspect it might have been better worded as:

In addition to being able to use gzip or bzip2 for compression within
the binary packages, you can now also use them for source files.

That was my reading of it, at any rate.

Cheers,
-- 
 -
|   ,''`.Stephen Gran |
|  : :' :sg...@debian.org |
|  `. `'Debian user, admin, and developer |
|`- http://www.debian.org |
 -




Re: Bits from the FTPMaster meeting

2009-11-15 Thread Joerg Jaspert

 Then can you (or someone else) please explain what exactly is meant by the 
 reference to bzip2 for binary packages in the following quote from the 
 original mail:
 ! You can use either gzip as usual or bzip2 for the compression within
 ! the binary packages - and now also for the source files.

That you can now compress not only the binary contents with bz2, but
also the source.

-- 
bye, Joerg
Ganneff kde and a keyboard? that doesn't fit the windows-idiot
user profile :)





Re: Bits from the FTPMaster meeting

2009-11-15 Thread Mike Hommey
On Sun, Nov 15, 2009 at 04:15:35PM +0100, Joerg Jaspert wrote:
 Tracking arch all packages
 --
 #246992 asked us to not delete arch all packages before the
 corresponding (if any) arch any packages are available for all
 architectures.  Example: whenever a new source package for emacs23
 gets uploaded the installation of the metapackage emacs_*_all.deb
 breaks on most architectures until the needed Architecture: any
 packages like emacs23 get built by the buildds. That happens because
 dak removes all arch: all packages but the newest one.
 
 While this behaviour is easily documented and one can easily devise a
 fix (just keep the arch all until the any is there, stupid), the
 actual implementation of it contains several nasty corner cases
 which is why it took so long to fix.
 
 Thankfully Torsten Werner took on this task during the meeting [2] and
 we are now in a position where we can merge his work.  We'll email
 before turning on this feature so that people can watch out for any
 strange cases which could cause problems.

Hurray!

Thanks a lot.

Mike





Re: Bits from the FTPMaster meeting

2009-11-15 Thread Neil Williams
On Sun, 15 Nov 2009 19:53:02 +
Mark Hymers m...@debian.org wrote:

 On Sun, 15, Nov, 2009 at 02:37:56PM -0500, Joey Hess spoke thus..
  Andreas Metzler wrote:
   FWIW dpkg does the smart thing by default. It uses gzip (both
   for the debian packages and debian.tar) but searches for both
   foo_42.orig.tar.bz2 and .gz. Explicitly passing an option is required
   to get bz2 compression for binary packages and/or debian.tar.
  
  Note that debootstrap does not support data.tar.bz2.
 
 I think there's some confusion here between source and binary formats.
 The announcement was referring to bzip2 when used as part of a source
 upload.  As far as I can tell from looking in the git logs, dak has
 supported data.tar.bz2 since 2005, so I'm surprised that this has never
 been an issue before if debootstrap can't handle it.

debootstrap-1.0.20/functions: extract

progress "$p" "$#" EXTRACTPKGS "Extracting packages"
packagename="$(echo "$pkg" | sed 's,^.*/,,;s,_.*$,,')"
info EXTRACTING "Extracting %s..." "$packagename"
ar -p "./$pkg" data.tar.gz | zcat | tar -xf -

So it appears that debootstrap simply might not have come across any
packages using data.tar.bz2 yet - the range of packages that
debootstrap has to handle is quite limited.
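
Making that compression-agnostic doesn't look hard - something along
these lines (an untested sketch, not a patch; whether bzcat is available
that early in a bootstrap is a separate question):

# pick whichever data member the .deb actually contains
data_member=$(ar -t "./$pkg" | grep '^data\.tar')
case "$data_member" in
    data.tar.gz)  ar -p "./$pkg" "$data_member" | zcat  | tar -xf - ;;
    data.tar.bz2) ar -p "./$pkg" "$data_member" | bzcat | tar -xf - ;;
    *) echo "unsupported data member in $pkg" >&2; exit 1 ;;
esac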

I don't suppose there's a list anywhere, so does anyone know which
packages are already using data.tar.bz2?

deb-gview is also affected by this but I haven't had any bug reports.
That's fairly easy to fix in deb-gview, though, thanks to its use of
libarchive.

multistrap will also be affected.

In comparison, using foo_1.0.0.orig.tar.bz2 isn't much of a problem.
I've been adding .tar.bz2 upstream tarballs to most of my projects, but
the .tar.gz still gets 95% of the downloads, even though the .tar.bz2
is ~100k smaller.

If someone using data.tar.bz2 in their packages can let me know which
packages use it (and why), it would help.

-- 


Neil Williams
=
http://www.data-freedom.org/
http://www.nosoftwarepatents.com/
http://www.linux.codehelp.co.uk/





Re: Bits from the FTPMaster meeting

2009-11-15 Thread Joerg Jaspert
On 11935 March 1977, Joerg Jaspert wrote:

 NEW/Byhand
 --
 Due to the massive changes in the archive, NEW (and also Byhand) had to
 be disabled. Certain assumptions made by the processing tools no longer
 applied.  The last week was used to work on this issue and we think this
 will be fixed today, so NEW processing will return to its normal speed
 soon.

And I just committed and pushed the last change for it, so it is
actually back alive. We have already accepted a few packages to test, but
should you notice weird things happening please notify us.

(Yes, we know, the detailed overview of a package isn't yet visible)

-- 
bye, Joerg
Free beer is something that I am never going to drink and free speech is
something that people are never going to be allowed to. ;)





Re: Bits from the FTPMaster meeting

2009-11-15 Thread Charles Plessy
On Sun, Nov 15, 2009 at 04:15:35PM +0100, Joerg Jaspert wrote:
 
 source-only uploads

Hi Jörg and all the FTP team,

first of all, I want to say a big thank you for all this work. I have given
harsh comments on parts of it, but I am really grateful for most of it.

I am curious about how the rebuild of the architecture-independent packages
happens. Could we have a bit more detail on the buildd(s) and arch(es)
involved, the contact points in case of breakage, etc…

Have a nice day,

-- 
Charles Plessy
Tsurumi, Kanagawa, Japan





Re: Bits from the FTPMaster meeting

2009-11-15 Thread Joerg Jaspert
On 11936 March 1977, Charles Plessy wrote:

 source-only uploads
 I am curious about how the rebuild of the architecture-independent packages
 happens.

That depends on what we end up with.
Probably all buildds can build arch:all (the buildd maintainer wants it
that way, at least), and there will be a new control field
Build-Architecture for the arch:all ones, for the few cases where it is
mandatory to build on one specific architecture.
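
Purely illustrative, since neither the field name nor the semantics are
settled yet, but think of a stanza along these lines:

Package: foo-data
Architecture: all
Build-Architecture: amd64
Description: hypothetical example of pinning an arch:all build
 With such a field dak/wanna-build could schedule the arch:all build
 of foo-data on an amd64 buildd only.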

 Could we have a bit more details on the buildd(s) and arch(es)
 involved, the contact points in case of breakage, etc…

No, if you reread the second paragraph of this section you will see why.

When we go and activate it we will list more details. (It is not active yet.)
There is no change for contact points or the like, though; it stays as it
is. (And it is actually nothing we are involved in; that's the buildd folks.)

-- 
bye, Joerg
 dvdbackup (0.1.1-7) unstable; urgency=medium
 . 
   * The wiki-wacky-oaxtepec release





Re: Bits from the FTPMaster meeting

2009-11-15 Thread Carlo Segre

On Sun, 15 Nov 2009, Joerg Jaspert wrote:


The current winning opinion is to go with the source+throw away
binaries route.  We are close to being able to achieve this, it is
simply that it has not yet been enabled.  Before any version of this
can be enabled, buildd autosigning needs to be implemented in order
that dak can differentiate buildd uploads vs maintainer uploads.



It may be necessary to also move the building of contrib packages to the 
unofficial non-free buildd network.  As it stands, any contrib package 
which has a non-free Build-Depends is not guaranteed to build on all 
architectures, since not all the buildd systems include the non-free 
archives.  Up to now it has been possible to work around this with binary 
uploads, building manually to get the package into the archive on as many 
architectures as possible.  When this new option is enabled, that will no 
longer be possible.


Cheers,

Carlo

--
Carlo U. Segre -- Professor of Physics
Associate Dean for Graduate Admissions, Graduate College
Illinois Institute of Technology
Voice: 312.567.3498Fax: 312.567.3494
se...@iit.edu   http://www.iit.edu/~segre   se...@debian.org





Re: Bits from the FTPMaster meeting

2009-11-15 Thread Steffen Joeris
On Mon, 16 Nov 2009 02:04:28 pm Carlo Segre wrote:
 On Sun, 15 Nov 2009, Joerg Jaspert wrote:
  The current winning opinion is to go with the source+throw away
  binaries route.  We are close to being able to achieve this, it is
  simply that it has not yet been enabled.  Before any version of this
  can be enabled, buildd autosigning needs to be implemented in order
  that dak can differentiate buildd uploads vs maintainer uploads.
 
 It may be necessary to also move the building of contrib packages to the
 unofficial non-free buildd network.  As it stands any contrib package
 which has a non-free Build-Depends is not guaranteed to build on all
 architectures since not all the buildd systems include the non-free
 archives.  Up to now it has been possible to do binary uploads to work
 around this and get as many architectures in the archive as possible to
 build manually.  When this new option is enabled, it will no longer be
 possible.
As I understood it, it is still possible for DDs to do binary-only uploads (as 
allowed per GR). The throwing away of binaries applies only to the initial 
source+binary upload.
(In an ideal world there should be no need for DDs to do binary-only uploads 
by hand, but in reality it has to happen every now and then, at least for 
security.)

Cheers
Steffen





Re: Bits from the FTPMaster meeting

2009-11-15 Thread Steve Langasek
Hello,

On Sun, Nov 15, 2009 at 04:15:35PM +0100, Joerg Jaspert wrote:
 source-only uploads
 ---
 After some discussion about this, there are two opinions within the
 ftp-team about this matter.  Given that other distros experience has
 shown that allowing source only uploads results in a huge loss of
 quality checks and an increased load on the buildds from packages
 FTBFSing everywhere,

Is there any quantitative evidence of this, or is it purely anecdotal?  I
guess this is mostly based on the Ubuntu experience, where it was a cause
for grumbling early in Ubuntu's history that people were uploading
completely untested packages and causing problems, but I'm not sure that
this is a significant problem in Ubuntu today; I think packages in Ubuntu
are at least as likely to FTBFS because of either skew between the archive
and the set of packages being tested with (which is a problem that will
affect Debian as well) or because the package has been imported from Debian
in an unbuildable state (which points to a latent FTBFS bug in Debian).

I'm not asserting that this problem is *not* significant, I simply don't
know - and am interested in knowing if anyone has more data on this beyond
some four-year-old anecdotes.  Certainly, Debian with its wider range of
ports is more likely to run into problems because of this than Ubuntu, and
so will need to be fairly cautious.

-- 
Steve Langasek   Give me a lever long enough and a Free OS
Debian Developer   to set it on, and I can move the world.
Ubuntu Developerhttp://www.debian.org/
slanga...@ubuntu.com vor...@debian.org




Re: Bits from the FTPMaster meeting

2009-11-15 Thread Mike Hommey
On Mon, Nov 16, 2009 at 12:48:53AM +0100, Joerg Jaspert wrote:
 On 11936 March 1977, Charles Plessy wrote:
 
  source-only uploads
  I am curious on how the rebuild of the architecture-independant packages
  happens.
 
 That depends on what we get out with in the end.
 Probably all buildds can build arch:all (so the buildd maintainer wants it),
 and there will be a new control field Build-Architecture for the
 arch:all ones, for the few cases where it is mandantory to build on one
 specific architecture.

Another obvious check would be for the Build-Depends-Indep packages to
be available on the building architecture.

Mike





Re: Bits from the FTPMaster meeting

2009-11-15 Thread Robert Collins
On Sun, 2009-11-15 at 19:29 -0600, Steve Langasek wrote:

 I'm not asserting that this problem is *not* significant, I simply don't
 know - and am interested in knowing if anyone has more data on this beyond
 some four-year-old anecdotes.  Certainly, Debian with its wider range of
 ports is more likely to run into problems because of this than Ubuntu, and
 so will need to be fairly cautious.

I'd have assumed that ports will have no effect on this: Debian only
uploads one binary arch (from the maintainer) anyway; only builds on
that arch will be directly affected, except in the case of a build
failure that the maintainer could have caught locally.

-Rob




Re: Bits from the FTPMaster meeting

2009-11-15 Thread Goswin von Brederlow
Steve Langasek vor...@debian.org writes:

 Hello,

 On Sun, Nov 15, 2009 at 04:15:35PM +0100, Joerg Jaspert wrote:
 source-only uploads
 ---
 After some discussion about this, there are two opinions within the
 ftp-team about this matter.  Given that other distros experience has
 shown that allowing source only uploads results in a huge loss of
 quality checks and an increased load on the buildds from packages
 FTBFSing everywhere,

 Is there any quantitative evidence of this, or is it purely anecdotal?  I
 guess this is mostly based on the Ubuntu experience, where it was a cause
 for grumbling early in Ubuntu's history that people were uploading
 completely untested packages and causing problems, but I'm not sure that
 this is a significant problem in Ubuntu today; I think packages in Ubuntu
 are at least as likely to FTBFS because of either skew between the archive
 and the set of packages being tested with (which is a problem that will
 affect Debian as well) or because the package has been imported from Debian
 in an unbuildable state (which points to a latent FTBFS bug in Debian).

 I'm not asserting that this problem is *not* significant, I simply don't
 know - and am interested in knowing if anyone has more data on this beyond
 some four-year-old anecdotes.  Certainly, Debian with its wider range of
 ports is more likely to run into problems because of this than Ubuntu, and
 so will need to be fairly cautious.

I don't think the number of ports will have any meaning here. If the
package is too broken to build/work on the maintainer's architecture it
will most likely be broken on all archs. On the other hand, if it works
on the maintainer's architecture then testing or no testing makes no
difference to the other ports.

It seems to me the only port that MIGHT suffer quality issues is the
one the maintainer uses - meaning usually i386 or amd64, and Ubuntu
already has experience there.

MfG
Goswin

