Re: COUNT(buildd) IN (2,3) (was: Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-14 Thread Stephen Gran
This one time, at band camp, Sven Luther said:
 On Mon, Mar 14, 2005 at 05:03:30PM +0100, David Schmitt wrote:
  
  Thus the problem is less in the development and more in the support
  of testing requirements (all arches in sync) and stable support
  (security response time). Therefore the N=2 requirement is only
  needed for tier-1 arches but not for the tier-2 which will not
  officially release a stable.
 
 What is the detailed reasoning for this requirement anyway ? 

I thought that was fairly clear - a 12 day build of a security fix is
unacceptable, especially since it hampers getting that fix out the door
for everyone else.

 And would a ten-way redundant distcc cluster count as one machine ? 

I would certainly interpret it that way, and hopefully the people behind
the proposal would as well.

Take care,
-- 
 -----------------------------------------------
|   ,''`.    Stephen Gran                       |
|  : :' :    [EMAIL PROTECTED]                  |
|  `. `'     Debian user, admin, and developer  |
|    `-      http://www.debian.org              |
 -----------------------------------------------




Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Daniel Burrows
On Monday 14 March 2005 07:49 am, Hamish Moffatt wrote:
 Sure. Who's doing that on anything but i386/amd64/powerpc?

  Yes, I'm sure all those s390 users are running it on a machine in their 
basements... ;-)

  Daniel

-- 
/--- Daniel Burrows [EMAIL PROTECTED] --\
|   But what *does* kill me bloody well leaves me dead!   |
| -- Terry Pratchett, _Carpe Jugulum_   |
\ Evil Overlord, Inc: http://www.eviloverlord.com --/




Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Steve Langasek [EMAIL PROTECTED] writes:

 On Sun, Mar 13, 2005 at 10:47:15PM -0800, Thomas Bushnell BSG wrote:
 Steve Langasek [EMAIL PROTECTED] writes:

  The sh and hurd-i386 ports don't currently meet the SCC requirements, as
  neither has a running autobuilder or is keeping up with new packages.

 It is impossible for any port under development to meet the SCC
 requirements.  We need a place for such ports.  Where will it be?

 On the contrary, the amd64 port does, and is currently maintained
 completely outside official debian.org infrastructure.

 -- 
 Steve Langasek
 postmodern programmer

With major problems due to the insanely high space, bandwidth and CPU
load requirements this puts on alioth.

Every upcoming port will need ~10 GB for the source archive and ~10 GB
for binaries (half that at 50% coverage). It is also likely that once a
port gets past, say, 20% it quickly goes all the way to 90+% (unless it
is hurd).

It would be nice to have a common place for upcoming ports: a common
source mirror with a reduced set of packages that make up the base
system, build-essential and other build-important packages. They
wouldn't need to include gnome and kde and such. Once a port reaches the
point where it needs gnome/kde it should be ready to enter scc.
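The reduced package set Goswin describes (base system plus build-essential) could be selected mechanically from a Packages-style index. A minimal sketch in POSIX shell; the stanza layout follows the Debian Packages format, but the `Build-Essential` field test and the `base_set` helper name are illustrative, not an existing Debian tool:

```shell
# Sketch: select the reduced package set (base + build-essential)
# from a Packages-style index.  Hypothetical helper, for illustration.

base_set() {
    # prints names of packages with Priority required/important or
    # Build-Essential: yes; reads one Packages-format file
    awk -v RS='' '
        /Priority: (required|important)/ || /Build-Essential: yes/ {
            for (i = 1; i <= NF; i++)
                if ($i == "Package:") { print $(i + 1); break }
        }
    ' "$1"
}
```

Run over a real Packages file, this would print the package names to feed a reduced source mirror.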

MfG
Goswin


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Thomas Bushnell BSG [EMAIL PROTECTED] writes:

 Steve Langasek [EMAIL PROTECTED] writes:

 The point is that the ftpmasters don't want to play host to various
 ports that *aren't* yet matured to the point of usability, where being
 able to run a buildd is regarded as a key element of usability in the
 port bootstrapping process.  The amd64 team have certainly shown that
 it's possible to get to that point without being distributed from the
 main debian.org mirror network.

 Speaking of the mirror network is a red-herring.  Mirrors are not
 forced to distribute every arch; they can and should eliminate archs
 they aren't interested in distributing.

They are. That is mirror policy for primary mirrors. That is the
reason why amd64 is not in sid and consequently not in sarge.

Instead of dropping archs from Debian, mirrors should be allowed to do
partial mirrors. That would solve the space and bandwidth problems for
mirrors without adverse effects on the project as such.

MfG
Goswin





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Mark Brown
On Mon, Mar 14, 2005 at 02:57:25PM +0100, Marc Haber wrote:

 Considered that ftbfs bugs for scc architectures are not going to be
 RC any more, people will stop fixing them, thus the scc architectures

Some may, but some would continue to be helpful.  My experience doing
porting work was that problems with portability and getting portability
bugs fixed were more tied up with the general quality of the package and
how well it was looked after than with anything else.  

Then again, I was mostly providing fixes for things along with the
report.

-- 
You grabbed my hand and we fell into it, like a daydream - or a fever.





Package flow scenarios (was: Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-14 Thread David Schmitt
[Sven, please teach yourself and your mutt the use of reply-to-list]

On Monday 14 March 2005 14:06, Sven Luther wrote:
 On Mon, Mar 14, 2005 at 01:02:34PM +0100, David Schmitt wrote:
[...]
 No, you didn't understand.

You are right.

 let's tell the plan again : 

   1) people upload to unstable always. Only sources are considered, and
 people who have not tested them and upload unbuildable sources are utterly
 flamed for their lack of discernment :).

   2) the autobuilders build those packages for unstable for the tier 1
 arches.

   3) after some time, the packages are moved to testing, as done by the
 testing script for the tier 1 arches.

   4) the tier 2 arches build their stuff from testing. there are two
 results of this :

 4.1) the package builds without problem, it is added to the tier 2
 archive.

 4.2) the package fails to build. This used to be an RC-critical FTBFS,
 but is not so anymore. The porters are responsible for fixing the bug and
 uploading a fixed package to unstable, as they do now.

Wouldn't it be better to say: the porters are responsible for fixing the bug 
and supplying a patch? Of course, in the case of unresponsive maintainers, 
there is always the possibility of an NMU, but this shouldn't be the norm - not 
even with tier-2 arches.

   4.2.1) the unstable-built package passes testing rather quickly, and
 is then rebuilt for the tier 2 arches, back to 4).

   4.2.2) the unstable-built package is held out of testing for whatever
   issue not relevant to the tier 2 arch. They can then be built in an
   arch-specific way, and uploaded directly to the arch in question, or
   maybe through an arch-specific mini-testing script.

Careful: this would touch on "binary packages must be built from the 
unmodified Debian source" (required, among other reasons, for license 
compliance) from the Nybbles proposal.

[benefits moved downwards for discussion]

 Now, given this full description, does my proposal seem more reasonable ?

Wow. I envy your ability to churn out such masses of mostly sane emails. Let 
me contrast this with my mental model of how Debian-flex will work in the case 
of an FTBFS on a tier-2 arch:

1) upload to unstable
2) autobuild for all tier-1 and 2 arches
   2.1) package builds without problem: goto 4)
   2.2) FTBFS on tier-2 arch - FTBFS bug+patch
2.2.1) maintainer applies patch with high priority: goto 3)
2.2.2) maintainer applies patch with low priority: goto 4)
2.2.3) maintainer doesn't apply the patch: goto 6)

3) package is reuploaded with porters-fix: goto 1)

4) package propagates to testing without regard to tier-2 FTBFS
5) maintainer uploads next version with porters-fix: goto 1)

6) package probably needs a porter-NMU

 This would have the benefit of :

   - Not having slower arches hold up testing.

Check.

   - not overloading the testing scripts.

Check.

   - allow the tier 2 arches to have the benefit of testing, that is an
 archive without packages suffering from RC bugs and breakage-of-the-day, as
 they would if they built from unstable.

All packages passing 2.1 and 4 would be eligible for a tier-2 testing. I have 
faith that the current discussion of the Nybbles proposal will lead to a 
structure allowing this.

   - diminish the workload for the tier 2 autobuilders, since they only have
 to build proven-good packages, and not random stuff going into unstable.

   - still allow the tier 2 arches to be part of debian, and hope for a sarge
 release, which leads to:

The Nybbles proposal explicitly says: "They will be released with sarge, with 
all that implies (including security support until sarge is archived)".

   5) Once a stable release is done, the above can be repeated by the tier 2
   arches, until they reach release quality and can maybe be part of a
 future stable point release.

If this can be achieved properly (with fast security and stuff), then the arch 
could also reach tier-1 status (probably without ftp.d.o distribution).

  Obviously britney/dak is available from cvs.d.o and meanwhile also as
  debian package. So the question for me (administrating two sparc boxes)
  is why _we_ don't setup our own testing when obviously the ftp-masters
  and core release masters are not willing to do the work for us?

 I guess this is also the message i get from them. The same happens for NEW
 processing, and the solution is to set up our own unofficial archive, thus
 leading to the split and maybe future fork of debian.

This is a dangerous time for you, when you will be tempted by the Dark Side 
of the Force.

Let's structure that again in a list. That helps me thinking.

1) tier-2 will have its own resources[1] and support/development team 
(currently porters).

NEW processing:
2) NEW processing will happen for Arch: all/any packages anyways
3) NEW Arch: tier-2-only packages can be judged by people from 1), because 
they have no impact on tier-1 (obviously)

Forking:
4) forking a package would revoke its eligibility for tier-2 (binary 

Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Sven Luther [EMAIL PROTECTED] writes:

 I don't really understand that point though, since the plan is to drop mirror
 support for all minor arches, what does it cost to have a 3 level archive
 support : 

   1) tier 1 arches, fully mirrored and released.

One full set of sources, 10G.

   2) tier 2 arches, mostly those that we are dropping, maybe mirrored from
   scc.debian.org in a secondary mirror network. (why not ftp.debian.org/scc
   though ?).

Second set of identical sources, +10G == 20G.

   3) tier 3 arches, or in development arches, available on
   ftp.debian.org/in-devel or something.

Third set of identical sources, +10G == 30G.

Only if all 3 are on the same server can the sources be hardlinked, and
getting those hardlinks preserved on mirrors is tricky.

 I don't see how having the in-devel arches be hosted on alioth
 instead on the official debian ftp server would cause a problem.

 Also, i don't understand why scc.debian.org is better than ftp.debian.org/scc;
 really, ideally we could have /debian, /debian-scc, and /debian-devel or
 something such. Is it really a physical problem for ftp-master to hold all
 these roles ? What is it exactly that ftp-masters want to drop all these
 arches for ? 

 Mirrors could then choose to go with 1) only (most of them will), or also
 mirror 2) and/or 3).

Why not just /debian as we have now? That means all sources are in
debian/pool/ just once. And mirrors can choose to exclude archs from
the mirrors, as many (non-primary) mirrors already do. The know-how for
partial mirrors is there and nothing needs to be invented for it.
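The partial-mirror approach Goswin points to can be sketched as a small helper that assembles the per-arch rsync excludes. The mirror URL, destination path and exclude patterns below are placeholders; a real Debian partial mirror also has to handle installer directories, Contents-* files and arch-specific Packages indices:

```shell
# Sketch: build an rsync command that mirrors /debian while skipping
# the binary trees of chosen architectures.  Host/path are placeholders.

partial_mirror_cmd() {
    # usage: partial_mirror_cmd <arch-to-drop>...  -> echoes the command
    cmd="rsync -a --delete"
    for arch in "$@"; do
        cmd="$cmd --exclude=binary-$arch/ --exclude=*_$arch.deb --exclude=*_$arch.udeb"
    done
    echo "$cmd rsync://mirror.example.org/debian/ /srv/mirror/debian/"
}
```

Echoing the command rather than running it keeps the sketch inspectable; a mirror operator would execute it from cron.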

I fail to see why the mirror situation should have a (changing)
impact on the archive layout, and I fail to see how splitting the
archive, and thereby duplicating sources, helps mirrors that want
more than just i386/ppc/amd64.

 Friendly,

 Sven Luther

MfG
Goswin





Security Support and other reasoning (was: Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-14 Thread David Schmitt
On Monday 14 March 2005 14:06, Sven Luther wrote:
  My answer is that I don't care enough for two out of 15 boxes to bother with
  the hassle; I will update them to sarge, be grateful for the grace time given
  and - iff nobody steps up to do the necessary porting and security work -
  donate them to Debian when etch's release leaves my current nameserver
  without security updates.
 
  What would you say, if I asked you to provide security support for sparc
  because _I_ need it for my nameservers?

 There was no comment from the security team about this new plan; we don't
 know for sure that this is the problem, we don't even know in detail what
 the problems are and how they relate to the drastic solutions (in France
 we would say horse remedies) proposed here.

The problem I - as a system administrator - see is that waiting a week for a 
security update might not be acceptable.

Of course there are many scenarios where there is no need for such tight 
security, but it seems only honest to make the difference obvious?

  to put down hard, objective and verifiable rules where everyone can
  decide whether an arch deserves use of central Debian resources like
  mirror space on the central network.

 But why, and is it the good/best solution ? Why did they not consult with
 the arch porters before hand ? Why did they not put the announcement in a
 more diplomatic and inviting way ?

We are all only humans? We are all emotionally laden? 

I think putting down rules for the circumstances under which an arch is 
eligible for tier-1 is a good thing. This reminds me of the oft-cited "We hide 
no problems": for some, a week's wait until a security update is built _is_ a 
serious problem; for others it is shlib skew and testing propagation; others 
again need a working installer.

Taken together these seem to make the difference between tier-1 and 2.


Regards, David
-- 
- hello... how's it going today?
- *cough* fine *sniffle* *wheeze*
- thank god we communicate over a septic medium ;)
 -- Matthias Leeb, Uni f. angewandte Kunst, 2005-02-15



Mirror Network (was: Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-14 Thread David Schmitt
On Monday 14 March 2005 18:11, Goswin von Brederlow wrote:
 Thomas Bushnell BSG [EMAIL PROTECTED] writes:
  Speaking of the mirror network is a red-herring.  Mirrors are not
  forced to distribute every arch; they can and should eliminate archs
  they aren't interested in distributing.

 They are. That is mirror policy for primary mirrors. That is the
 reason why amd64 is not in sid and consequently not in sarge.

 Instead of dropping archs from Debian, mirrors should be allowed to do
 partial mirrors. That would solve the space and bandwidth problems for
 mirrors without adverse effects on the project as such.

And would break d-i, which currently provides a list of mirrors to choose 
from.

Also notably, distribution on the main mirror network is neither a requirement 
for nor a part of being in tier-1.

Regards, David
-- 
- hello... how's it going today?
- *cough* fine *sniffle* *wheeze*
- thank god we communicate over a septic medium ;)
 -- Matthias Leeb, Uni f. angewandte Kunst, 2005-02-15



Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Steve Langasek [EMAIL PROTECTED] writes:

 Hi Andreas,

 On Mon, Mar 14, 2005 at 07:37:51AM +0100, Andreas Tille wrote:
 On Sun, 13 Mar 2005, Steve Langasek wrote:

 IMHO all these facts with exception of those social facts I marked (?)
 are fulfilled by Sparc.

 For reference, the killer for most archs is the 98% built criterion;
 from today's numbers:

 wanna-build stats:
   alpha:97.57% up-to-date,  97.59% if also counting uploaded pkgs
   arm:  92.12% up-to-date,  92.15% if also counting uploaded pkgs
   hppa: 97.66% up-to-date,  97.68% if also counting uploaded pkgs
   hurd-i386:35.59% up-to-date,  35.59% if also counting uploaded pkgs
   i386: 99.83% up-to-date,  99.83% if also counting uploaded pkgs
   ia64: 97.39% up-to-date,  97.41% if also counting uploaded pkgs
   m68k: 97.75% up-to-date,  97.86% if also counting uploaded pkgs
   mips: 96.74% up-to-date,  96.79% if also counting uploaded pkgs
   mipsel:   93.01% up-to-date,  93.01% if also counting uploaded pkgs
   powerpc:  97.99% up-to-date,  98.00% if also counting uploaded pkgs
   s390: 94.31% up-to-date,  94.31% if also counting uploaded pkgs
   sparc:95.77% up-to-date,  95.77% if also counting uploaded pkgs

 [curious that ia64 is lower than some others right now -- when we looked
 last week, it was above 98%, so maybe etch would have a *different* 4
 architectures. ;)]

Please remove all contrib/non-free packages, remove packages in p-a-s
that were previously built and remain in the w-b db, and then use the
total of main packages not excluded by p-a-s as the total for the % count.

Unless you do that, those numbers are just wrong and misleading. Both
the compiled and the up-to-date graphs on buildd.d.o suffer from this
miscounting.
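Goswin's recount can be sketched as a simple filter: drop contrib/non-free and p-a-s-excluded packages before computing the percentage. The file names and the three-column "package section state" format below are hypothetical stand-ins for the wanna-build database, not its real layout:

```shell
# Sketch: recompute an arch's "up-to-date" percentage counting only
# main packages not excluded by Packages-arch-specific (p-a-s).
# Input formats are illustrative stand-ins for the w-b database.

adjusted_pct() {
    # usage: adjusted_pct <states-file> <pas-excluded-file>
    states=$1 excluded=$2
    awk -v excl="$excluded" '
        BEGIN { while ((getline p < excl) > 0) skip[p] = 1 }
        $2 != "main" || ($1 in skip) { next }  # drop contrib/non-free and p-a-s
        { total++; if ($3 == "installed" || $3 == "uploaded") ok++ }
        END { printf "%.2f\n", (total ? 100 * ok / total : 0) }
    ' "$states"
}
```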

 This may be fixable for one or more architectures for etch by getting a
 handle on any existing buildd problems, which I'd personally be happy to
 see happen, but we also have to consider that there are some bits (like
 ftp-master.d.o itself, as well as toolchains/kernel synchronization)
 that just don't scale well to the number of architectures we currently
 have.

 The 98% mark may seem a little high, but in fact it's not.  Note that
 there's been a thread on debian-devel just this week about autobuilders
 holding up release-critical bugfixes, and the architectures in question
 are still well above 90% -- and they're not the only architectures
 delaying RC fixes, they're just the ones that stand out enough to flame
 about.  Based on the experience we've had trying to keep sarge on track,
 setting the barrier high for etch is definitely in our best interest,
 IMHO.

Which is purely a problem of starvation in the needs-build
queue. Something that would be a few days' delay with a FIFO turns into
a month without any build attempt.

Anyway, this goes towards the "2 buildds must be able to keep up with
etch" criterion for excluding archs.

 For the specific case of sparc, it's worth noting that this architecture
 was tied for last place (with arm) in terms of getting the ABI-changing
 security updates for the kernel; it took over 2 months to get all
 kernel-image packages updated for both 2.4 and 2.6 (which is a fairly
 realistic scenario, since woody also shipped with both 2.2 and 2.4),
 which is just way too unresponsive.  The call for sparc/arm kernel folks
 in the last release update was intended to address this; correct me if
 I'm wrong, but to my knowledge, no one else has stepped forward to help
 the kernel team manage the sparc kernels.

That is a really good argument for excluding an arch. Much better than
pulling some magic 98% or 2-buildd figure out of a hat and hoping that
keeps them out.

 - there must be a sufficient user base to justify inclusion on all
  mirrors, defined as 10% of downloads over a sampled set of mirrors
 Hmmm, given the fact that i386 makes up a really large percentage, 10%
 might be quite a hard limit. I guess this reduces the number of supported
 architectures by fitting it to a previously defined number of
 architectures we are willing to support.

 This point is *not* about supported architectures, only about
 architectures carried by the primary mirror network.  We did consider
 having a single set of requirements for both release architectures and
 primary mirror architectures, and the structure of the announcement
 might still reflect that, but I couldn't justify using percent market
 share as a straight criterion for release architectures.

Release should be governed by the number of developers, whether they can
keep up, whether the buildds work and so on. *Quality*

Mirroring should be governed by the number of users (as in downloads),
the amount of traffic for an arch. No point having more mirrors than
users. *Quantity*

There might be 100 firms downloading to their proxy and maintaining 1
million s390 systems (VMs) with 10 million users. Does s390 then get
kicked out of the release because they download efficiently, only 100
downloads instead of 10 million?

Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Joel Aelwyn
On Mon, Mar 14, 2005 at 04:44:27PM +0100, David Schmitt wrote:
 
 In my reading of the proposal, not-tier-1 arches will receive appropriate 
 space and resources off the main mirror network if they can demonstrate 
 viability (working buildd, basic unix functionality, compiled 50%, 5 
 developers, 50 users) and DFSG-ness (freely usable, unmodified Debian 
 source). As far as I can see all current official Debian arches fulfill these 
 criteria. For the in-development arches like k*bsd, with a handful of 
 developers and an extremely small userbase, other solutions are already used.

<hat mode="on" type="Nienna porter">

Amd64 is the only development arch I know of using Alioth, and, well,
folks have already said that's proven to be an issue for various reasons.
The only existing copy of the Nienna archive is behind a dynamic cable
hookup, which works great so long as it's just a couple of people hacking
on it but won't scale well should we manage to get the base system clean
(the majority of NetBSD port issues are 'core' things like a different
concept of passwd/shadow management that require design and often code to
integrate with a Debian-policy-compliant system).

Once it's possible to actually run debootstrap / pbuilder on the system,
I expect the number of packages will jump upwards pretty sharply. I'd
love to have a better answer for archiving by that time. I don't think
that would need to be (or even, frankly, SHOULD be) mirrored except by
someone's personal desire to do so; the gap from devel arch to SCC is
narrow enough, once you have enough users that it would be a noticeable
load, that it shouldn't take long to get promoted that far.

</hat>

Anyway. The other comment I have, for the Release team and ftpmaster team,
is: thanks. Having the concrete information on what is expected of an arch
before it can reach certain stages is extremely useful, in that it means I
won't have to wonder when I should be pestering anyone.

(BTW, does this mean the wishlist ftp.d.o bug for arch creation should be
closed for the moment, since I don't anticipate it reaching even SCC status
for some while, yet?)
-- 
Joel Aelwyn [EMAIL PROTECTED]                ,''`.
                                            : :' :
                                            `. `'
                                              `-




Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Thiemo Seufer
Tollef Fog Heen wrote:
 * Thiemo Seufer 
 
 | For anyone who uses Debian as the base of a commercial solution it is a
 | requirement. Grabbing some random unstable snapshot is a non-starter.
 
 You do realise this is exactly what Ubuntu is doing?  (Grab «random»
 snapshot; stabilise)

The "stabilise" is the missing part in the proposal. Stabilization and
security would need to be done outside Debian. That's a serious amount
of work, likely to be multiplied several times because there's no
release candidate to collect it in one place.


Thiemo





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Matt Zimmerman
On Mon, Mar 14, 2005 at 04:51:30PM +0100, Marc Haber wrote:

 On Mon, 14 Mar 2005 15:41:16 +, Scott James Remnant
 [EMAIL PROTECTED] wrote:
 Are you thinking of any particular developers here? 
 
 For example, it suspiciously looks like the Security Team only has one
 public active member, Martin Schulze, since at least October 2004.
 
 As far as I know, the other people being publicly visible as active
 members of the Security Team in the time before October 2004, are now
 working for Ubuntu.

I assume that you are referring to me, though I'm puzzled by your use of the
plural form.

Of course, the same thing has happened in the past many times, both within
the security team and elsewhere, with different people filling the various
roles, and without Ubuntu as a convenient scapegoat.  Indeed this happens
all the time, derived from the fact that Debian work is tied very closely to
leisure time for nearly all of us.

Fortunately, the reality in this case is less bleak than your description,
and in some ways it is better than it has been in the past when people
become less active.

- Primarily behind the scenes, Steve Kemp is involved with security
  operations, actively supporting the security team in a secretary role

- Both in public[0] and in private, Martin Pitt (acting in an Ubuntu role)
  and others from the Ubuntu community have been collaborating with the
  Debian security team on patches for a wide range of vulnerabilities

- In Debbugs, Joey Hess and others have been helping security fixes to flow
  from Ubuntu to Debian unstable

The latter two points reflect how Debian is benefiting in some very direct
ways from security work being done in Ubuntu.

[0] http://lists.ubuntu.com/mailman/listinfo/security-review

-- 
 - mdz





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Sven Luther [EMAIL PROTECTED] writes:

 On Mon, Mar 14, 2005 at 02:12:48AM -0800, Thomas Bushnell BSG wrote:
 Sven Luther [EMAIL PROTECTED] writes:
 
  BTW, how much of the human intervention needed for buildd signing
  plays into the delays you see, and did you discuss the possibility of
  a fully automated buildd setup, like ubuntu does and i advocated years
  ago ?
 
 I can't answer for Steve, but it seems to me that signing isn't the
 problem.  There is not a big delay in getting packages signed; the
 delay is much more frequently getting them actually built.

 Well, i will disagree. It only takes the signer going off on vacation for
 uploads to break for a couple of days or weeks, which immediately breaks
 builds of packages dependent on said packages, and causes delays in the
 build queue, especially on slower arches. This is not theoretical; it has
 already happened to me in the past.

Which is a really big argument for having multiple admins and even
sharing the signing privileges between buildd admins. For multibuild
an idea was to automatically mail the other buildd admins when the
primary admin fails to act within a certain time.
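The multibuild escalation idea could look like this minimal sketch; the deadline, message text and helper name are made up for illustration (a real implementation would read the buildd's signing log and actually send mail):

```shell
# Sketch: if the primary buildd admin has been inactive past a
# deadline, notify the backup admins.  All names are hypothetical.

DEADLINE_HOURS=48

escalate() {
    # usage: escalate <hours-since-primary-admin-last-acted>
    if [ "$1" -ge "$DEADLINE_HOURS" ]; then
        echo "mail backup-admins: primary inactive for $1 hours"
    else
        echo "ok"
    fi
}
```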

As I understand it the w-b team is against multiple admins per arch or
even per buildd, making sickness, accidents or vacations much more critical.

 Where human delay did come into play was in getting the xfree86 mess
 cleaned; in theory it should have taken one or two days, but in
 practice it took much longer.

 Why not fully eliminate the human factor ? Ubuntu does automated builds from
 source-only uploads: the package sources are built and signed by a developer,
 autobuilt on all arches, and i don't believe they are individually signed
 after that.

Security reasons?

What would have helped in the xfree86 mess and many others would be
the buildd rebuilding the chroot from scratch (or backup) after
errors. Also a feature proposed for multibuild.

 Friendly,

 Sven Luther

Yes, this screams for multibuild to replace wanna-build. Sorry for the
advertisement of the still mostly vapourware.

MfG
Goswin





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Wouter Verhelst [EMAIL PROTECTED] writes:

 On Mon, 14-03-2005 at 12:38 +0100, David Schmitt wrote:
 On Monday 14 March 2005 11:28, Thomas Bushnell BSG wrote:
  In this case, it was a bug that required human intervention, a package
  upload that accidentally would hose a chroot, which required the
  chroot to be repaired for each affected buildd.
 
 Even that can be mitigated by debootstrapping the chroot once a day 
 automatically.

 Not really. You are severely underestimating the time it takes to do
 that on the slower architectures.

Make it every 50 builds or on error. Slower archs will do it less
often (time-wise).

Also, for archs that can keep up with etch with a maximum of 2 buildds
this should hardly be a problem. For those, running debootstrap or
unpacking a chroot.tar.gz is a matter of a minute.
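The "every 50 builds or on error" policy is simple to express. A sketch, with the rebuild action reduced to an echo (a real buildd would run debootstrap or unpack a pristine chroot.tar.gz at that point):

```shell
# Sketch: rebuild the buildd chroot every N builds or after an error.
# Counter handling and the rebuild action are illustrative only.

REBUILD_EVERY=50

maybe_rebuild_chroot() {
    # usage: maybe_rebuild_chroot <builds-since-rebuild> <last-build-status>
    count=$1 status=$2
    if [ "$status" -ne 0 ] || [ "$count" -ge "$REBUILD_EVERY" ]; then
        echo "rebuild"   # e.g. debootstrap, or unpack a saved chroot.tar.gz
    else
        echo "keep"
    fi
}
```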

MfG
Goswin





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread David Schmitt
On Monday 14 March 2005 18:37, Goswin von Brederlow wrote:
 Steve Langasek [EMAIL PROTECTED] writes:
  This point is *not* about supported architectures, only about
  architectures carried by the primary mirror network.  We did consider
  having a single set of requirements for both release architectures and
  primary mirror architectures, and the structure of the announcement
  might still reflect that, but I couldn't justify using percent market
  share as a straight criterion for release architectures.

 Release should be governed by the number of developers, whether they can
 keep up, whether the buildds work and so on. *Quality*

 Mirroring should be governed by the number of users (as in downloads),
 the amount of traffic for an arch. No point having more mirrors than
 users. *Quantity*

 There might be 100 firms downloading to their proxy and maintaining 1
 million s390 systems (VMs) with 10 million users. Does s390 then get
 kicked out of the release because they download efficiently, only 100
 downloads instead of 10 million?

To highlight Steve's most important sentence:
| This point is *not* about supported architectures, only about
| architectures carried by the primary mirror network.

And if s390 only needs 100 downloads per year, they don't need to be 
distributed on the mirrors, but can easily download from a central site.

What a long way to "Yes, you are right."

Regards, David
-- 
- hello... how's it going today?
- *cough* fine *sniffle* *wheeze*
- thank god we communicate over a septic medium ;)
 -- Matthias Leeb, Uni f. angewandte Kunst, 2005-02-15



Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 04:04:45PM +0100, Roland Mas wrote:
 - d-i, especially the kernel problems: okay, so there the
   arch-specific kernels have played a role.

Future (post-sarge) kernels will have one kernel package only, which will
build all arches, and possibly even all .udebs, like the ubuntu kernel does,
so this will be a moot point in the future. The new kernel team is rather
fit and responsive, and welcoming of help.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Bill Allombert
On Sun, Mar 13, 2005 at 08:45:09PM -0800, Steve Langasek wrote:
 Therefore, we're planning on not releasing most of the minor architectures
 starting with etch.  They will be released with sarge, with all that
 implies (including security support until sarge is archived), but they
 would no longer be included in testing.
 
 This is a very large step, and while we've discussed it fairly thoroughly
 and think we've got most of the bugs worked out, we'd appreciate hearing
 any comments you might have.

I assume such a drastic step comes from a lack of man-power to perform
some key tasks.

Why not start a campaign to recruit more developers into the specific teams
that need more man-power? This was done by the Security team and the
Release team with much success.

In the last few years, we have seen the creation of teams to share work
between developers, like the KDE team, the GNOME team, the kernel team and
the release team (and dozens of lesser-known teams), in part thanks to the
introduction of Alioth, which has enabled us to be more productive. We have
a new installer architecture that should be more maintainable.

I think we have fixed 99% of the problems of supporting 12 architectures.
Renouncing so close to the goal, without even trying, seems a real step backward.

Cheers,
-- 
Bill. [EMAIL PROTECTED]

[How I can write such a calm email after the shock I had reading the
nybbles is beyond me. Do not assume, then, that I have no strong feelings
on the issue: I value highly the stable releases on non-mainstream archs]

Imagine a large red swirl here... before it vanishes?





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Rene Engelhard [EMAIL PROTECTED] writes:

 Hi,

 On Monday, 14 March 2005 08:36, Steve Langasek wrote:
 wanna-build stats:
   i386: 99.83% up-to-date,  99.83% if also counting uploaded pkgs
   ia64: 97.39% up-to-date,  97.41% if also counting uploaded pkgs
   powerpc:  97.99% up-to-date,  98.00% if also counting uploaded pkgs

 [curious that ia64 is lower than some others right now -- when we looked
 last week, it was above 98%, so maybe etch would have a *different* 4
 architectures. ;)]

 where we see that two of the most important archs are in that list (i386, ppc).

 ppc is barely at 98%. I don't think that barrier should be that high. We
 *should* at least release with the three most important archs: i386, amd64,
 powerpc.

[EMAIL PROTECTED]:~/archive/pure64$ wanna-build/wb-cmd status
  Known packages:8931 (   9027) 
  Installed packages:8743
Needs-build packages:   0
   Dep-Wait packages:  50   - lots on !amd64 or non-free
 Broken packages:  59   - should be in p-a-s till ported
 Not-For-Us packages:  96   - will not be ported, p-a-s
   Building packages:   1
  1 fschueler-guest
  Uploading packages:   7
  7 fschueler-guest
 Failed packages:  71
 71 fschueler-guest

Total for amd64 should be: 8931 - 59 - 96 == 8776
Packages total up-to-date: 8743 / 8931 == 97.895% (~ what buildd.d.o shows)
Packages up-to-date      : 8743 / 8776 == 99.624% (what buildd.d.o says it shows)

Amd64 easily makes the 98% line.
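Goswin's two ratios can be recomputed mechanically from the wb-cmd output
above; a minimal shell sketch (variable names are mine, figures copied from
the output):

```shell
# Recompute the amd64 up-to-date ratios from the wb-cmd output above.
known=8931; installed=8743; broken=59; notforus=96

eligible=$((known - broken - notforus))   # 8776 packages amd64 can build
awk -v i="$installed" -v k="$known" -v e="$eligible" 'BEGIN {
    printf "vs all known packages: %.3f%%\n", i / k * 100
    printf "vs eligible packages:  %.3f%%\n", i / e * 100
}'
```

Either way, amd64 clears the 98% bar on the eligible-package measure.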

MfG
Goswin





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Marc Haber
On Mon, 14 Mar 2005 11:27:11 -0500, David Nusinow
[EMAIL PROTECTED] wrote:
On Mon, Mar 14, 2005 at 04:06:39PM +, Alastair McKinstry wrote:
 will Security releases be available?

Explicitly no, unless the porters themselves handle them.

Will early-release information be available to the porters? Or do
porters only start building their security updates once the official
advisory has gone out?

Not necessarily. I imagine it such that the porters set up their own pull from
unstable, the same way amd64 does now. They can set up testing themselves
(remember, dak is in the archive now) so they can run their own testing in
parallel to the mainline one.

What a huge waste of manpower, done seven times in a row.

Greetings
Marc

-- 
-- !! No courtesy copies, please !! -
Marc Haber |Questions are the | Mailadresse im Header
Mannheim, Germany  | Beginning of Wisdom  | http://www.zugschlus.de/
Nordisch by Nature | Lt. Worf, TNG Rightful Heir | Fon: *49 621 72739834



Call for help / release criteria (was: Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-14 Thread David Schmitt
On Monday 14 March 2005 17:31, Aurélien Jarno wrote:
  Frank Küster wrote:
  - First of all, we should take the details as a starting point for
discussion, not as a decision that has made.  Nevertheless, we must
take into account that there are reasons for it: The people doing the
release, ftpmaster, etc. work noticed that they cannot go on like
before.

 Why don't they ask for help?

They do so now. Are you (all) prepared to take up the call?

While the slower arches are not able to fulfil the quality requirements for a
top-notch stable release with security support and everything, they surely
should be able to provide their binaries in tier-2.

I also have faith that the security team will run security updates through
tier-2 buildds, but without delaying the DSAs for tier-1 arches and without
having to pull all-nighters if a build fails on a tier-2 arch.


Regards, David
-- 
- hello... how's it going today?
- *cough* fine *snort* *wheeze*
- thank god we communicate over a septic medium ;)
 -- Matthias Leeb, Uni f. angewandte Kunst, 2005-02-15



Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread cobaco (aka Bart Cornelis)
On Monday 14 March 2005 17:46, Thiemo Seufer wrote:
 John Goerzen wrote:
 [snip]

   - the release architecture must have N+1 buildds where N is the
   number required to keep up with the volume of uploaded packages
  
   - the value of N above must not be  2
 
  It seems to me that if an arch can keep up with builds, why impose this
  artificial restriction?

 I guess in order to have an assured minimum build time for critical
 packages like security updates.

if that's the point, it would be a lot simpler to simply say
"security updates will be announced x days after they enter the queue"

- no waiting on architectures that are slow
- no dropping any arches that manage to keep up (regardless of whether they
are used by a large percentage of users or not)
-- 
Cheers, cobaco (aka Bart Cornelis)
  
1. Encrypted mail preferred (GPG KeyID: 0x86624ABB)
2. Plain-text mail recommended since I move html and double
format mails to a low priority folder (they're mainly spam)




Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 01:51:02PM -0300, Humberto Massa wrote:
 Matthias Urlichs wrote:
 
 With a decent toolset, doing a security package for 10 architectures
 should be a nearly-constant amount of work, no matter which base the
 number 10 is written in.
 
 Speaking of which, can anyone here explain to me why a two-line
 security fix on, say, KDE, makes things need to be recompiled for 12
 days? (!!!) One could think that there are more incremental ways of
 dealing with recompilation of enormous packages.

Not currently supported, and not really considered a technical solution.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Hamish Moffatt [EMAIL PROTECTED] writes:

 On Mon, Mar 14, 2005 at 10:10:32AM +0100, Ingo Juergensmann wrote:
 All the work and support over all those years by all those users and porters
 will be vanished with that stupid idea, imho. 

 Ingo, obviously you are pissed off. But really, is there much benefit in
 making *releases* for the SCC architectures? 

A stable release, or at least a testing, is quite essential to many
users. Especially with the struggling archs, sid is broken for many
days on end with missing packages, broken dependencies, version skews,
and so on.

If you want to install a full system from scratch on an scc arch you
can basically give up. Something will not work that day, and maybe not
all month.

The reasons for this are DAK's _all.deb handling and wanna-build's
non-FIFO queue. Both stretch the problems out to last days or weeks.

The stable and testing releases, through their extra consistency
checks, don't have those problems.

 The packages will still be built and d-i maintained as long as there are
 porters interested in doing that work for the architecture. The only
 difference is that those architectures won't influence testing and they
 won't be officially released.

I can fully agree that scc archs should not hold back the official
release. But that does not mean the scc archs have to stop releasing,
stop security, stop testing nor does it require splitting them off to
another server.

The release, testing and security teams could just concentrate on the
main archs. The scc archs can keep their own testing (somewhat increasing
the size by having older sources linger longer) and do security uploads
whenever their buildds are up for it.

Security releases for scc would be made by the porters or by a
security member willing to manage it. They don't have to hold up the
DSA but if they can make it in time they should be included in the
announcement.

A release for an scc arch could come weeks or months later than the
main archs and be done by the porters alone. The only requirement for
it would be that only stable sources can be used (with a few
exceptions maybe for arch-specific sources). Stable sources that don't
build can't be in an scc release. E.g. no patching of kde to build on
XYZ after the main archs have released; kde would have to be removed
on that arch.

For scc testing I believe britney would have to be patched or run
separately for each arch, with the porters being in charge of adding
overrides for that arch. The normal testing team should not be
bothered by this, though. If the choice is between having no testing and
working on britney to support it, many porters will probably help out.

MfG
Goswin





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Rene Engelhard
Hi,

Am Montag, 14. März 2005 18:58 schrieben Sie [On Monday, 14 March 2005 18:58, you wrote]:
 Rene Engelhard [EMAIL PROTECTED] writes:
  ppc is barely at 98%. I don't think that barrier should be that high. We
  *should* at least release with the three most important archs: i386, amd64,
  powerpc.

 [EMAIL PROTECTED]:~/archive/pure64$ wanna-build/wb-cmd status
[...]
 Amd64 easily makes the 98% line.

Did I doubt that? I guessed as much. I just fear that powerpc may at some
point fall below 98%, if it's now at *exactly* 98% when you count not yet
uploaded packages. And we should *not* release without powerpc, IMHO.
(Sparc too, but that's another matter)

Regards,

Rene

-- 
 .''`.  René Engelhard -- Debian GNU/Linux Developer
 : :' : http://www.debian.org | http://people.debian.org/~rene/
 `. `'  [EMAIL PROTECTED] | GnuPG-Key ID: 248AEB73
   `-   Fingerprint: 41FA F208 28D4 7CA5 19BB  7AD9 F859 90B0 248A EB73



Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread David Schmitt
On Monday 14 March 2005 12:45, Wouter Verhelst wrote:
 On Mon, 14 Mar 2005 at 12:38 +0100, David Schmitt wrote:
  On Monday 14 March 2005 11:28, Thomas Bushnell BSG wrote:
   In this case, it was a bug that required human intervention, a package
   upload that accidentally would hose a chroot, which required the
   chroot to be repaired for each affected buildd.
 
  Even that can be mitigated by debootstrapping the chroot once a day
  automatically.

 Not really. You are severely underestimating the time it takes to do
 that on the slower architectures.

A current pbuilder chroot takes 121 MB (containing build-essential already).

How long does a '(mv $chroot foo; rm -Rf foo && cp $stash $chroot)' take for 
121 MB on $small-arch? Please enlighten me, I am really interested since I 
(obviously) have no clue about the orders of magnitude of performance Debian 
runs on.


Regards, David
-- 
- hello... how's it going today?
- *cough* fine *snort* *wheeze*
- thank god we communicate over a septic medium ;)
 -- Matthias Leeb, Uni f. angewandte Kunst, 2005-02-15



Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread David Nusinow
On Mon, Mar 14, 2005 at 07:03:52PM +0100, Marc Haber wrote:
 Will early-release information be available to the porters? Or do
 porters only start building their security updates once the official
 advisory has gone out?

Why can't porters join the security team?
 
 Not necessarily. I imagine it such that the porters set up their own pull from
 unstable, the same way amd64 does now. They can set up testing themselves
 (remember, dak is in the archive now) so they can run their own testing in
 parallel to the mainline one.
 What a huge waste of manpower, done seven times in a row.

Hopefully not that huge, as the tools have already been written. Perhaps there
can be a single package pool for all SCC/Tier-2 arches so it only has to be
done once.

 - David Nusinow





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 06:20:23PM +0100, Goswin von Brederlow wrote:
 Sven Luther [EMAIL PROTECTED] writes:
 
  I don't really understand that point though, since the plan is to drop 
  mirror
  support for all minor arches, what does it cost to have a 3 level archive
  support : 
 
1) tier 1 arches, fully mirrored and released.
 
 One full set of sources, 10G.
 
2) tier 2 arches, mostly those that we are dropping, maybe mirrored from
scc.debian.org in a secondary mirror network. (why not ftp.debian.org/scc
though ?).
 
 Second set of identical sources, +10G == 20G.
 
3) tier 3 arches, or in development arches, available on
ftp.debian.org/in-devel or something.
 
 Third set of identical sources, +10G == 30G.

Ah, no, nothing is stopping us from keeping the pool architecture for this,
just having different views of the stuff for different mirror networks to work
on.

 Only if all 3 are on the same server can the sources be hardlinked and
 getting those hardlinks preserved to mirrors is tricky.

Well, it just calls for smarter mirroring tricks.

Also, i do believe that not all mirrors carry experimental for example, and
said packages are in the pool all the same, or whatever.

  I don't see how having the in-devel arches be hosted on alioth
  instead on the official debian ftp server would cause a problem.
 
  Also, i don't understand why scc.debian.org is better than 
  ftp.debian.org/scc,
  really, ideally we could have /debian, /debian-scc, and /debian-devel or
  something such. Is it really a physical problem fro ftp-master to held all
  these roles ? What is it exactly that ftp-masters want to drop all these
  arches for ? 
 
  Mirrors could then chose to go with 1) only (most of them will), or also
  mirror 2) and/or 3).
 
 Why not just /debian as we have now. That means all sources are in
 debian/pool/ just once. And mirrors can choose to exclude archs from
 the mirrors as many (non primary) mirrors already do. The know-how for
 partial mirrors is there and nothing needs to be invented for it.

Yeah, that would be easiest, i was speaking about a logical separation of the
arches, not a physical one.
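The partial-mirror know-how mentioned in the quote above largely boils down
to exclude patterns; a hypothetical rsync exclude list for a mirror skipping
two archs might look like this (patterns illustrative and incomplete; the
real mirror scripts of the time, such as anonftpsync, carry fuller lists):

```
*_m68k.deb
*_s390.deb
binary-m68k/
binary-s390/
disks-m68k/
disks-s390/
```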

 I fail to see why the mirror situation should have an (changing)
 impact on the archive layout and I fail to see how splitting the
 archive, and thereby duplicating sources, helps mirrors that want
 more than just i386/ppc/amd64.

Thanks for your input

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Thiemo Seufer [EMAIL PROTECTED] writes:

 Thiemo Seufer wrote:
 Hamish Moffatt wrote:
  On Mon, Mar 14, 2005 at 12:06:18PM +, Martin Michlmayr wrote:
   * Hamish Moffatt [EMAIL PROTECTED] [2005-03-14 23:00]:
But really, is there much benefit in making *releases* for the SCC
architectures?
   
   For some SSC arches, it *might* not make a difference (possibly m68k)
   but others (e.g. s390 and mipsel) are typically used for servers
   or gateways, and you don't really want to run unstable in such
   environments.  testing+security updates might be a compromise, but
   unstable is clearly not an option for a S390 box or a mipsel Cobalt
   gateway.
  
  OK, that makes sense. Can you buy those architectures new? (Surely yes
  in the case of s390 at least, probably mipsel also as the mips CPU
  manufacturers are alive and well.)
 
 AFAIK m68k is the only one which isn't available anymore, unless you
 count its mmu-less variants in.

 Apparently even m68k is still available, as Wouter pointed out.


 Thiemo

I think the only criteria m68k fails are the "2 buildds have to
suffice to keep up with etch" requirement and the 10% download share.

Hell, if it comes down to m68k dropping out because of not enough
downloads, I will run "while true; do apt-get update; apt-get clean;
apt-get -d dist-upgrade; done". /irony

MfG
Goswin





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Tollef Fog Heen [EMAIL PROTECTED] writes:

 * Hamish Moffatt 

 | OK, that makes sense. Can you buy those architectures new? (Surely yes
 | in the case of s390 at least, probably mipsel also as the mips CPU
 | manufacturers are alive and well.)

 [EMAIL PROTECTED]:~# uname -a
 Linux eetha 2.4.29 #1 Fri Mar 4 02:35:42 EST 2005 mips unknown

 This was bought about a week ago; a linksys WRT54GS.

Welcome to the club.

Is this actually a mips or mipsel system, or even switchable? I'm still
using mine with the original firmware.

MfG
Goswin





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Hamish Moffatt [EMAIL PROTECTED] writes:

 On Mon, Mar 14, 2005 at 01:17:03PM +0100, Ingo Juergensmann wrote:
 On Mon, Mar 14, 2005 at 11:00:22PM +1100, Hamish Moffatt wrote:
  But really, is there much benefit in
  making *releases* for the SCC architectures? 
 
 What will happen is something like this: 
 
 A: Oh, let's see what we got here a nice Alpha server...
 B: Let us install Debian on it!
 *browsing the web*
 A: Oh, no release of Debian for Alpha... it's unsupported...
 B: Sad... it's a nice machine, but without a working Linux on it, we're 
 gonna
 throw it away

 It's unsupported officially, but unstable is still available.

 The porters could do their own release if they wished.

Not really. SCC only has unstable snapshots proposed. That's in no way
releasable.

 It would be interesting to hear how NetBSD/OpenBSD handles this
 situation, as they have a lot of ports. (Of course they have far fewer
 packages than we do so their problem is on a much smaller scale.)
 Do they release new versions on all ports at once?


 Hamish
 -- 
 Hamish Moffatt VK3SB [EMAIL PROTECTED] [EMAIL PROTECTED]

MfG
Goswin





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 06:43:21PM +0100, Goswin von Brederlow wrote:
 Sven Luther [EMAIL PROTECTED] writes:
  Where human delay did come into play was in getting the xfree86 mess
  cleaned; in theory it should have taken one or two days, but in
  practice it took much longer.
 
  Why not fully eliminate the human factor ? Ubuntu does automated build from
  source only uploads, the package sources are built and signed by a 
  developer,
  autobuilt on all arches, and i don't believe they are individually signed
  after that.
 
 Security reasons?

Hum, ...

so the buildd admins really examine all the packages for deviations that a
compromised buildd could have incorporated before signing them? Or they
scan the machine for a compromise and always detect it before signing?

I seriously doubt that this can be the criterion, and as said, Ubuntu does it,
so we could work something out as well if the will was there.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Christian Perrier

 Based on the last few hours only, I think you'll have lots of comments
 to meditate on :-)


Only if you consider that a few dozen people making a lot of noise,
and thus making the thread absolutely impossible to read for people
with a normal life and health, represent the feeling of nearly 1000
developers...

/me, quite happy with the work of the people involved in this
announcement and confident in their ability to hear the received
comments... the hardest part probably being the need to remove useless
noise and rants

...but quite sad (or happy?) to see that nearly only the proposal to handle
architectures differently got criticism... while this proposal contains
several other key points.






Sarge release (Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-14 Thread Christian Perrier
Quoting Steve Langasek ([EMAIL PROTECTED]):
 Hello all,
 
 As promised earlier on -project[0], after the release team/ftpmaster
 team meeting-of-minds last weekend, we have some news to share with the
 rest of the project.
 
 First, the news for sarge.  As mentioned in the last release team


It looks like the giant noise generated by this mail (I sometimes
wonder whether some people are in Debian just to make noise and
criticize every action that doesn't fit exactly with their point of
view) has hidden the first topic: we nearly have a release schedule,
and the sarge release is becoming more and more of a reality.

May I ask the people who have jumped on the architecture handling
topic to please also consider the great work done during this session
on *other* topics, and maybe say something about it too? :)

Anyway, I take this opportunity to thank the involved people for their
time and work as well as their commitment to the project.






Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Thiemo Seufer [EMAIL PROTECTED] writes:

 Hamish Moffatt wrote:
 On Mon, Mar 14, 2005 at 01:33:16PM +0100, Thiemo Seufer wrote:
  Hamish Moffatt wrote:
   On Mon, Mar 14, 2005 at 10:10:32AM +0100, Ingo Juergensmann wrote:
All the work and support over all those years by all those users and 
porters
will be vanished with that stupid idea, imho. 
   
   Ingo, obviously you are pissed off. But really, is there much benefit in
   making *releases* for the SCC architectures? 
  
  For anyone who uses Debian as base of a commercial solution it is a
  requirement. Grabing some random unstable snapshot is a non-starter.
 
 Sure. Who's doing that on anything but i386/amd64/powerpc?

 Desktops/Laptops aren't the only computers, there are much more
 embedded systems around than you seem to think. E.g. 4G Systems uses
 Debian as base system. With the growth of embedded systems (32 MB RAM
 seems to be the common lower limit now) the use of full-fledged OS'es
 will increase.

 If you want to run Linux on such a mips/mipsel system today, you can
 either buy a Linux from Montavista/Redhat/..., or use Debian. The
 system is still unlikely to report popcon results if you do.


 Thiemo

The magix buildd on buildd.net that helped out last year when mipsel
had a huge backlog is just such a system. It's from mycable.de but
basically identical to the 4G Systems box (2 LANs instead of LAN +
wireless).

Good enough to build qt and some other ~1000 debs on it.

MfG
Goswin





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Tollef Fog Heen
* John Goerzen 

| That makes sense, but it doesn't preclude, say, alpha making an etch
| release once the main 4-arch etch release is made.  They'd use the same
| set of source packages as the main release, even if they didn't track
| testing along the way.  When there are divergences, they should be small
| and minimal.

FWIW, this is more or less what the sarge amd64 release will be, AIUI.

-- 
Tollef Fog Heen,''`.
UNIX is user friendly, it's just picky about who its friends are  : :' :
  `. `' 
`-  





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Thomas Bushnell BSG
Goswin von Brederlow [EMAIL PROTECTED] writes:

 Thomas Bushnell BSG [EMAIL PROTECTED] writes:
 
  Steve Langasek [EMAIL PROTECTED] writes:
 
  The point is that the ftpmasters don't want to play host to various
  ports that *aren't* yet matured to the point of usability, where being
  able to run a buildd is regarded as a key element of usability in the
  port bootstrapping process.  The amd64 team have certainly shown that
  it's possible to get to that point without being distributed from the
  main debian.org mirror network.
 
  Speaking of the mirror network is a red-herring.  Mirrors are not
  forced to distribute every arch; they can and should eliminate archs
  they aren't interested in distributing.
 
 They are. That is mirror policy for primary mirrors. That is the
 reason why amd64 is not in sid and consequently not in sarge.

Sorry, I'm speaking in term of possible future policies, not the
present.

Create i386.us.debian.org, powerpc.us.debian.org,
amd64.us.debian.org, etc.  Each of them points to the existing
mirrors.  Make the installer set up sources.list to specify those
names.  Mirrors can carry what they want, provided the DNS admin knows
what they are carrying.

It's about ten minutes' work.
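As a sketch of what that ten minutes of DNS work might look like, each
per-arch name is just an alias onto a mirror known to carry that
architecture (a hypothetical zone fragment; none of these aliases or
mirror hostnames actually exist):

```
; hypothetical fragment of a debian.org zone
i386.us     IN  CNAME  mirror-full.example.org.
powerpc.us  IN  CNAME  mirror-full.example.org.
amd64.us    IN  CNAME  mirror-amd64.example.org.
```

The installer would then write e.g.
`deb http://amd64.us.debian.org/debian unstable main` into sources.list.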

Thomas





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Julien BLACHE

Ingo Juergensmann [EMAIL PROTECTED] wrote:

 Moreover, the criteria given in your mail are just so oriented
 towards/against some architectures that it's a bad joke (I was going
 to write "disgusting", really).

 It's a total change of direction: from as long as there are people who
 care, we will release those arch to no matter if there are people who
 care, we just release mainstream archs. :-(

Yes, this is a shame, moreover this is absolutely against all the
principles of the Project. A few people are dictating what to do next,
and that's just not how Debian works.

You'll figure out that the timing for this new policy is absolutely
perfect; we're a week away from the voting period for the new DPL
term. The current DPL can't (and won't, obviously) do anything about
it, and the candidates signed the proposal. I should add that the
Vancouver meeting was announced at the very last minute, too. And I'm
wondering: who paid for the travel expenses? Did the people involved
pay out of their own pockets? Did the Project pay? Did somebody else
pay the bill?
  
 I joined the Project because there was no core team to decide what was
 to be done in an authoritative manner. Today, we have such a core
 team, and I'm not sure I agree with that anymore (some would rather
 say it's a cabal, well, their call).

 And yet no chance to replace the cabal nor elect other people instead of
 them, which is more like a problem for the project than just a few archs,
 imho. 

We can.

I hereby ask the people involved in this proposal to step down
immediately from their positions in the Project. You've violated a
couple of rules already, and you've violated the spirit of this
Project.

 Congrats, folks, you've got what you've been after for several years
 now. Great job. Doorstop architectures, EXIT this way  --- []

 This proposal is really doing harm to Debians good reputation and I'm not
 sure if Debian is surviving this at all. 

It will. It just doesn't have to happen. Our call; we're the ones
making Debian, they're not.

JB.

-- 
 Julien BLACHE [EMAIL PROTECTED]  |  Debian, because code matters more 
 Debian  GNU/Linux Developer|   http://www.debian.org
 Public key available on http://www.jblache.org - KeyID: F5D6 5169 
 GPG Fingerprint : 935A 79F1 C8B3 3521 FD62 7CC7 CD61 4FD7 F5D6 5169 





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Julien BLACHE

John Goerzen [EMAIL PROTECTED] wrote:

 Indeed.  I am one such user.  I have always felt fortunate that I don't
 have to really care what architecture a machine is, because of course
 it runs Debian.  I have run Debian on Alpha, x86, amd64, powerpc, and
 sarc systems, both as a desktop and a server environment on most of
 these.

 Here's a key point: the utility of Debian on x86 is greatly diminished,
 for me, if I can't run Debian on alpha (or arch x) also.

Seconded, of course. And look at the list of architectures planned for
Etch; looks like Debian's merging with Ubuntu soon.

On the one hand, we have the Ubuntu cabal at key positions in the
Project; on the other hand, we have Project Scud, whose members are
currently employed by companies having interests in Debian.

I don't like that smell... at all.

 I feel sad today, I feel it is a sad day for the Project.

 I agree, and I too am quite sad that a number of DPL candidates signed
 off on this without asking hard questions about it or even putting it
 out for discussion and feedback first.

We need a serious kick applied ASAP, or this Project is going to
die. Let's kick.

JB.

-- 
 Julien BLACHE [EMAIL PROTECTED]  |  Debian, because code matters more 
 Debian  GNU/Linux Developer|   http://www.debian.org
 Public key available on http://www.jblache.org - KeyID: F5D6 5169 
 GPG Fingerprint : 935A 79F1 C8B3 3521 FD62 7CC7 CD61 4FD7 F5D6 5169 





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Tollef Fog Heen
* Roland Mas 

| - Assuming porting teams haven't left in disgust, they still do their
|   porting jobs and submit bugs and patches.  These patches are very
|   likely to rot in the BTS, since the maintainer is now able to *say*
|   go away, you're not released, you don't interest me instead of
|   just thinking it and quietly ignoring them.  Where's the incentive?
|   The package migrates to testing anyway.

FWIW, most of the patches I've seen sent in by the AMD64 porting team
have been applied quickly, so even though the maintainer might have
been able to say «go away, you're not a port yet», they haven't.

-- 
Tollef Fog Heen,''`.
UNIX is user friendly, it's just picky about who its friends are  : :' :
  `. `' 
`-  





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Wouter Verhelst
On Mon, 14-03-2005 at 19:21 +0100, David Schmitt wrote:
 A current pbuilder chroot takes 121 MB (containing build-essential already).
 
 How long does a '(mv $chroot foo; rm -Rf foo && cp $stash $chroot)' take for 
 121 MB on $small-arch?

I'm guessing about half an hour. Didn't try it, though.

But that is not a solution; it would only get you halfway. You'd also
need to ensure $stash is kept reasonably up-to-date (taking away
valuable CPU time from buildd), which opens you up to the same problems
you get in $chroot. Alternatively, one could do an apt-get upgrade
inside the chroot after copying $stash; that will likely take a few
hours (there's a reason why my m68k boxen are only slightly tracking
unstable).
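
The swap David asks about can be sketched as a small shell function (a hedged sketch: the $chroot/$stash naming follows the mail above, and this is not the buildd software's actual mechanism):

```shell
# Hedged sketch of the chroot refresh discussed above: move the stale
# build chroot aside, restore the pristine stash, then reclaim space.
refresh_chroot() {
    chroot_dir=$1; stash_dir=$2
    mv "$chroot_dir" "$chroot_dir.old"   # cheap rename on one filesystem
    cp -a "$stash_dir" "$chroot_dir"     # the expensive part on slow archs
    rm -rf "$chroot_dir.old"             # cleanup can overlap resumed builds
}
```

The rename-first order keeps the window without a usable chroot as short as the `cp`, not the `rm`.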

The debootstrap itself is likely to take most of the day on a 68040
(which half our m68k buildd pool consists of); for reference, the last
time I installed the 25 MHz 68040 that quickstep.nixsys.be is (which,
BTW, is now the m68k experimental autobuilder), the installation process
was started somewhere around Friday, and we were able to connect it to
wanna-build somewhere Monday or Tuesday (not exactly sure anymore). And
yes, I /did/ log in during the weekend.

-- 
 EARTH
 smog  |   bricks
 AIR  --  mud  -- FIRE
soda water |   tequila
 WATER
 -- with thanks to fortune





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Steve Langasek [EMAIL PROTECTED] writes:

 On Mon, Mar 14, 2005 at 09:09:09AM +0100, Adrian von Bidder wrote:
 On Monday 14 March 2005 05.45, Steve Langasek wrote:
  Architectures that are no longer being considered for stable releases
  are not going to be left out in the cold.  The SCC infrastructure is
  intended as a long-term option for these other architectures, and the
  ftpmasters also intend to provide porter teams with the option of
  releasing periodic (or not-so-periodic) per-architecture snapshots of
  unstable.

 [I'm a pure x86 user atm, so if this is a non-issue, I'll gladly be 
 educated]

 Why only unstable?  In other words: will it be possible for scc arches to 
 have a testing distribution?  Obviously, this testing/arch will not 
 influence the release candidate arch testing, but will allow real releases 
 of scc-arches if a RM/release team steps up.

 (A popular question...)

 There are a few problems with trying to run testing for architectures
 that aren't being kept in sync.  First, if they're not being kept in
 sync, it increases the number of matching source packages that we need
 to keep around (which, as has been discussed, is already a problem);

Why is that a problem?

The only problem mentioned (and nothing has been discussed) is mirror
bandwidth/space limitations. If scc is split into a separate archive
(separate hostname and all) and is strictly voluntary, it in no way
affects the size or mirrors of the main archs. The only thing affected
would be scc.d.o (go buy a bigger disk, Debian has money) and voluntary
scc mirrors.

I doubt those mirrors that decide to mirror scc.d.o (adding 10G source
+ 10G per arch) mind a few extra G for per-arch testing sources. scc
mirrors can be partial and only mirror some archs or only testing or
only unstable. Let the mirror admins decide if they have the space.
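
The partial-mirror idea can be sketched in shell; the exclude patterns and the host name are illustrative assumptions, not a real mirror's tree layout:

```shell
# Hedged sketch: emit rsync --exclude options for the per-arch trees a
# partial scc mirror admin chooses to skip.  The path/file patterns are
# illustrative; a real Debian mirror tree may differ.
mirror_excludes() {
    for a in "$@"; do
        printf '%s ' "--exclude=binary-$a/" "--exclude=*_$a.deb"
    done
}
# usage (illustrative host name):
#   rsync -a $(mirror_excludes s390 mips) rsync://scc.example.org/debian/ /srv/mirror/
```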

 second, if you want to update using the testing scripts, you either have
 to run a separate copy of britney for each arch (time consuming,
 resource-intensive) or continue processing it as part of the main
 britney run (we already tread the line in terms of how many archs
 britney can handle, and using a single britney check for archs that
 aren't keeping up doesn't give very good results); and third, if
 failures on non-release archs are not release-critical bugs (which
 they're not), you don't have any sane measure of bugginess for britney
 to use in deciding which packages to keep out.

If scc is on scc.d.o, that can be a separate system from the main
archive. Lots of spare CPU time there to run its own britney. As for
the RC bugs, that indeed is a problem. I guess debbugs can be patched
to have more tags, and packages can be tagged 'rc-mipsel' by the porter
team or maintainer. Reportbug could even add the tag automatically
when reporting an RC bug from an scc arch.
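
A minimal sketch of that auto-tagging idea ('rc-$arch' is the naming proposed here, not an existing debbugs tag, and the report header is illustrative):

```shell
# Hedged sketch: derive an arch-specific RC tag and emit it in an
# illustrative bug-report header.  Falls back to a placeholder when
# dpkg is not available.
arch=$( (dpkg --print-architecture) 2>/dev/null || echo unknownarch)
cat <<EOF
Severity: serious
Tags: rc-$arch
EOF
```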

 For these reasons, I think the snapshotting approach is a better option,
 because it puts the package selection choices directly in the hands of
 the porters rather than trying to munge the existing testing scripts
 into something that will make reasonable package selections for you.

If porters can do a testing and stable tree on their own (i.e. two
separately managed snapshots), all is fine. But then you have the extra
sources lying around again that you wanted to avoid.

 -- 
 Steve Langasek
 postmodern programmer

MfG
Goswin





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 06:45:28PM +0100, David Schmitt wrote:
 On Monday 14 March 2005 18:37, Goswin von Brederlow wrote:
 To highlight Steves most important sentence:
 | This point is *not* about supported architectures, only about
 | architectures carried by the primary mirror network.
 
 And if s390 only needs 100 downloads per year, they don't need to be 
 distributed on the mirrors, but can easily download from a central site.
 
 What a long way to "Yes, you are right".

Well, as long as the discussion is on dropping from the mirror network, yes,
you may be right, but the proposal is to drop from stable/testing altogether,
isn't it ? 

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Tollef Fog Heen
* Sven Luther 

| So you mean that all the tier-2 arches should go and take over alioth as
| distribution media ? You read the answer of wiggy about this almost bringing
| alioth to his knees ? 

Please don't confuse «wanna-build access» and «distributed through the
normal (or SCC) mirror network».  Those are wholly separate issues and
even if you don't get your buildd «accepted by the empowered ones»,
you can still build and upload to the archive.

-- 
Tollef Fog Heen,''`.
UNIX is user friendly, it's just picky about who its friends are  : :' :
  `. `' 
`-  





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Uwe A. P. Wuerdinger
Stephen Gran schrieb:
This one time, at band camp, Ingo Juergensmann said:
On Mon, Mar 14, 2005 at 12:47:58PM +0100, Julien BLACHE wrote:

Moreover, the criteria given in your mail are just so oriented
towards/against some architectures, that it's a bad joke (I was going
to write disgusting, really).
It's a total change of direction: from "as long as there are people who
care, we will release those archs" to "no matter if there are people who
care, we just release mainstream archs". :-(

No, I thought the proposal stated quite clearly, if there are users and
there are porters, a given arch is able to be included.  All that means
is that those interested will actually have to do some of the work to
support things like security and the kernel.  I know many of you already
do quite good work as porters, and I mean no offense.  But I can see
quite clearly that it would be difficult for the security team or other
groups to keep up with things growing the way they are.
We need a working d-i on $arch,
we need a working kernel on $arch,
and we need security updates on $arch.
Solution: any $arch Debian releases has to add >= 1 developer
to the d-i, security and kernel teams.
Anything else was already said by someone else.
greets Uwe
--
Now they don't want to simply leave the Internet to a few people like the
IETF, who know what they are doing. It has become too important. - Scott
Bradner
http://www.highspeed-firewall.de/adamantix/
http://www.x-tec.de



Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Thiemo Seufer
Goswin von Brederlow wrote:
[snip]
 A release for an scc arch could come weeks or months later than the
 main archs and be done by the porters alone. The only requirement for
 it would be that only stable sources can be used (with a few
 exceptions maybe for arch specific sources). Stable sources that don't
 build can't be in an scc release. E.g. no patching of kde to build on
 XYZ after the main archs have released, kde would have to be removed
 on that arch.

With that policy there's little reason for any delay left. Not even
due to buildd time, since a freeze would be needed before release
anyways.


Thiemo





Re: COUNT(buildd) IN (2,3) (was: Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 11:54:32AM -0500, Stephen Gran wrote:
 This one time, at band camp, Sven Luther said:
  On Mon, Mar 14, 2005 at 05:03:30PM +0100, David Schmitt wrote:
   
   Thus the problem is less in the development and more in the support
   of testing requirements (all arches in sync) and stable support
   (security response time). Therefore the N>=2 requirement is only
   needed for tier-1 arches but not for the tier-2 which will not
   officially release a stable.
  
  What is the detailed reasoning for this requirement anyway ? 
 
 I thought that was fairly clear - a 12 day build of a security fix is
 unacceptable, especially since it hampers getting that fix out the door
 for everyone else.

So what? If those arches that are slower get security updates a bit later,
it's better than getting no security updates at all.

Also, security updates often have an embargo time of a couple of weeks anyway,
so ...

  And would a ten-way redundant distcc cluster count as one machine ? 
 
 I would certainly interpret it that way, and hopefully the people behind
 the proposal would as well.

I seriously doubt it. 

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Thiemo Seufer
Goswin von Brederlow wrote:
 Tollef Fog Heen [EMAIL PROTECTED] writes:
 
  * Hamish Moffatt 
 
  | OK, that makes sense. Can you buy those architectures new? (Surely yes
  | in the case of s390 at least, probably mipsel also as the mips CPU
  | manufacturers are alive and well.)
 
  [EMAIL PROTECTED]:~# uname -a
  Linux eetha 2.4.29 #1 Fri Mar 4 02:35:42 EST 2005 mips unknown
 
  This was bought about a week ago; a linksys WRT54GS.
 
 Welcome to the club.
 
 Is this actually a mips or mipsel system, or even switchable? I'm still
 using mine with the original firmware.

It is mipsel. Uname -m on mipsel gives 'mips' (or 'mips64').


Thiemo





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread John Goerzen
On Mon, Mar 14, 2005 at 01:23:43PM -0500, David Nusinow wrote:
  Not necessarily. I imagine it such that the porters set up their own pull 
  from
  unstable, the same way amd64 does now. They can set up testing themselves
  (remember, dak is in the archive now) so they can run their own testing in
  parallel to the mainline one.
  What a huge waste of manpower, done seven times in a row.
 
 Hopefully not that huge, as the tools have already been written. Perhaps there
 can be a single package pool for all SCC/Tier-2 arches so it only has to be
 done once.

You know, the irony in all this is that the effort required to support
these SCC archs is greater than what would have been required to support
a non-free section outside of ftp.debian.org.  It seems that we find it
easier to remove Free software from our archive than non-free.

-- John





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Robert Millan [EMAIL PROTECTED] writes:

 On Mon, Mar 14, 2005 at 11:26:07AM +0100, Andreas Barth wrote:
 * Sven Luther ([EMAIL PROTECTED]) [050314 11:20]:
  On Mon, Mar 14, 2005 at 10:39:24AM +0100, Robert Millan wrote:
   On Sun, Mar 13, 2005 at 08:45:09PM -0800, Steve Langasek wrote:
- the port must demonstrate that they have at least 50 users
 
   How do you demonstrate that?  Via popularity-contest?
  
  But then, popularity-contest installation per default was dropped for
  debian-installer rc3, so ...
 
 We don't say it needs 50 entries in popularity-contest, but just: 50
 users. How this is demonstrated, may be different. Enough traffic on the
 porter list might also be enough - or enough bug reports coming from
 that arch. Or whatever. I don't expect that to be the blocking criterion
 for any arch.

 I believe that kfreebsd-gnu matches all the mentioned requirements [1] except
 this one.  There are quite a lot of bug reports for this system (hundreds) but
 most of them come from me ;).  If you could be more specific, that'd be much
 appreciated.

 [1] Of course, we don't have the DD signatures yet, but there are more than 5
 DDs working on this so there's no problem collecting them.

On that note I think amd64 fails the 5 DDs criterion. When we asked
for inclusion we had 1 DD working on amd64 and several NMs I think. I
think when we hit the 98% mark there were 2 DDs involved.

Another criterion amd64 fails is that packages must be DD-built and
signed. That criterion seems really silly. If the archive is not in
Debian but managed outside, why should packages be exclusively DD-built
and signed? Also, when an arch is included, only uploads of DD-signed
debs are accepted, so the criterion solves itself once an arch is
added.

MfG
Goswin






Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Marc Haber
On Mon, 14 Mar 2005 19:11:01 +0100, Sven Luther
[EMAIL PROTECTED] wrote:
Well, it just calls for smarter mirroring tricks.

Do not expect mirror admins to run Debian, and to be willing to pull
smart mirroring tricks.

Greetings
Marc

-- 
-- !! No courtesy copies, please !! -
Marc Haber |Questions are the | Mailadresse im Header
Mannheim, Germany  | Beginning of Wisdom  | http://www.zugschlus.de/
Nordisch by Nature | Lt. Worf, TNG Rightful Heir | Fon: *49 621 72739834



Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Wouter Verhelst
On Mon, 14-03-2005 at 19:15 +0100, Sven Luther wrote:
 so the buildd admins really examine all the packages for deviations that a
 compromised buildd could have incorporated before signing them ? Or that they
 scan the machine for a compromise and always detect it before signing ? 

Not really.

As you know, nothing gets uploaded to the archive without it having a
gpg signature by a key in the Debian gpg keyring. That goes for
autobuilt packages, too.

Also, I never sign stuff unless it gets through my filters and into the
right Maildir (and one of the things my filters check is the 'From'
address), so only the correct host will be able to upload.

Apart from that, I regularly log in to my buildd hosts, and check up on
them. If the host were compromised, I'd notice -- just as much as I'd
notice if anyone would compromise my firewall.

-- 
 EARTH
 smog  |   bricks
 AIR  --  mud  -- FIRE
soda water |   tequila
 WATER
 -- with thanks to fortune





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Humberto Massa
Sven Luther wrote:

 On Mon, Mar 14, 2005 at 03:52:54PM -0300, Humberto Massa wrote:
  Speaking of which, can anyone here explain to me why a two-line
  security fix on, say, KDE, makes things need to be recompiled for 12
  days? (!!!) One could think that there are more incremental ways of
  dealing with recompilation of enormous packages.

 Not currently supported, and not really considered as a technical
 solution.

Your answer was not really considered an answer to my question either :-)
My original question began with the word "why...?". So I repeat: why
do small things need days of recompilation?

 They do not really, provided you keep about all the intermediate .o
 files of the preceding build, depending on the security fix naturally.

My points are: (1) this is feasible/viable and (2) this would put times
for tier-[23] arch builds in the same league (or at least compatible)
with tier-1 arch builds.

  Moreover, why isn't incremental building supported? And finally, why
  isn't it considered a technical solution?

 Because it is not needed for the fast tier1 arches ?

This is a chicken-and-egg thing, isn't it? And it should be considered a
*technical* solution, even if not a *political* one.

Massa
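
One concrete, hedged way to get the keep-the-intermediate-.o-files effect discussed in this exchange is to route builds through a compiler cache such as ccache; the /usr/lib/ccache wrapper directory is the conventional location and an assumption here:

```shell
# Hedged sketch: with ccache in front of the compiler, rebuilding a
# package after a two-line fix recompiles only the changed objects.
# /usr/lib/ccache is assumed to hold ccache's compiler wrapper symlinks.
export CCACHE_DIR=${CCACHE_DIR:-$HOME/.buildd-ccache}
export PATH=/usr/lib/ccache:$PATH   # wrappers are found before cc/gcc
# a second build of the same source would now mostly hit the cache, e.g.:
#   dpkg-buildpackage -us -uc
```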



Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Matthew Garrett
Julien BLACHE [EMAIL PROTECTED] wrote:

 You'll figure out that the timing for this new policy is absolutely
 perfect; we're a week away from the voting period for the new DPL
 term. The current DPL can't (and won't, obviously) do anything about
 it, and the candidates signed the proposal.

I haven't signed the proposal. I'm undecided on the technical side of
things (I'd rather see a list of the problems that are being solved, and
a description of how these proposals fix those problems), and I think
the way the meeting and conclusions were announced was fairly
disastrous.

 I should add that the
 Vancouver meeting was announced at the very last minute, too. And I'm
 wondering, who paid for the travel expenses ? Did the people involved
 pay out of their own pocket ? Did the Project pay ? Did somebody else
 pay the bill ?

As mentioned in
http://lists.debian.org/debian-project/2005/03/msg00015.html , the
funding came from NUUGF. As far as I know, the project spent no money on
this.
   
-- 
Matthew Garrett | [EMAIL PROTECTED]





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Julien BLACHE
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Hamish Moffatt [EMAIL PROTECTED] wrote:

 I guess the >= 2 buildds requirement might be an issue for the embedded
 CPUs like mips as unstable continues to grow.

Should it become a problem for MIPS, then I have a MIPS machine
sitting on my desk. I actually bought it for that exact purpose 2
years ago: an additional buildd. I bought a disk for that machine,
out of my pocket.

I never proposed the machine as an additional buildd, because, at the
exact same time, the discussions about how well the buildd maintainers
handled the load had just started, and Ryan Murray clearly stated that
he did not need additional buildd power for MIPS, although the
architecture was falling behind, with a couple of buildds down.

This machine is still available, it's in perfect working condition,
alive and kickin'. (Indigo2 R4400 SC w/256 MB of usable RAM)

JB.

- -- 
 Julien BLACHE - Debian  GNU/Linux Developer - [EMAIL PROTECTED] 
 
 Public key available on http://www.jblache.org - KeyID: F5D6 5169 
 GPG Fingerprint : 935A 79F1 C8B3 3521 FD62 7CC7 CD61 4FD7 F5D6 5169 
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.0 (GNU/Linux)
Comment: Processed by Mailcrypt 3.5.8 http://mailcrypt.sourceforge.net/

iD8DBQFCNd8nzWFP1/XWUWkRAgq6AKCWYU1reo8JAVWFeTSiYcu2VY6L7QCgpLHD
gOG1r4WLt80znO55oEyBVSI=
=sX08
-----END PGP SIGNATURE-----





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Joel Aelwyn
On Mon, Mar 14, 2005 at 03:32:40PM +0100, Jeroen van Wolffelaar wrote:
 
 Note that porter patches for kFreeBSD and amd64 so far seem, as far as I
 can see, to be relatively swiftly applied anyway by maintainers, despite
 those patches not being RC either. This suggests to me that also in the
 future with patches for SCC architectures, this should normally not be
 a problem, and of course, NMUs are possible otherwise.

Speaking as a patch-supplier for netbsd-i386 (Nienna), my experience has
been that many (even most) maintainers that are not simply asleep at
the wheel are willing to accept patches as long as they're fairly clear
in purpose and not too extensive. For most non-core packages, it can
be as simple as "need to grab a sanely recent autotools-dev", "please
re-libtoolize with mumble or higher", etc.
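
The autotools-dev fix mentioned above can be sketched as a small helper that copies current config.sub/config.guess over a package's stale copies before configuring; the /usr/share/misc location is the conventional one and assumed here:

```shell
# Hedged sketch: refresh a source tree's config.sub/config.guess from
# the copies shipped by autotools-dev (conventionally /usr/share/misc),
# so configure recognizes newer ports.
update_config_helpers() {
    srcdir=$1; helpers=${2:-/usr/share/misc}
    for f in config.sub config.guess; do
        [ -e "$helpers/$f" ] && cp -f "$helpers/$f" "$srcdir/$f"
    done
    return 0
}
```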

Core packages tend to make a lot more assumptions and be a lot less
portable, simply due to their nature (they tend to be meaningless on
non-Debian systems, which means frequently nobody has ever thought much
about a non-Linux, non-Glibc system). Even most of these are fairly
reasonable, though I've had a couple of flat refusals and a couple of
rather impressively long delays (no, I'm not going to name names, go look
it up in the BTS if you care).

I generally file FTBFS as 'wishlist' if it's a new package, and 'important'
if it previously built on the architecture and is now broken. It is fairly
rare for the latter to get downgraded, simply because it usually indicates
a fairly major issue that may apply elsewhere as well, and if a maintainer
is awake and friendly enough to porting that it has been built in the first
place, they tend to be willing to consider a regression to be important.
-- 
Joel Aelwyn [EMAIL PROTECTED]   ,''`.
 : :' :
 `. `'
   `-


signature.asc
Description: Digital signature


Re: COUNT(buildd) IN (2,3) (was: Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-14 Thread Julien BLACHE
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Stephen Gran [EMAIL PROTECTED] wrote:

  Thus the problem is less in the development and more in the support
  of testing requirements (all arches in sync) and stable support
   (security response time). Therefore the N>=2 requirement is only
  needed for tier-1 arches but not for the tier-2 which will not
  officially release a stable.
 
 What is the detailed reasoning for this requirement anyway ? 

 I thought that was fairly clear - a 12 day build of a security fix is
 unacceptable, especially since it hampers getting that fix out the door
 for everyone else.

Then we have to adjust our security support policy. Define Tier-1
archs for security support, release updates for them first, then for
the others. I fail to see how this could be a problem.

Some people here are looking for problems, when they should be looking
for solutions. Please stop thinking backward.

JB.

- -- 
 Julien BLACHE - Debian  GNU/Linux Developer - [EMAIL PROTECTED] 
 
 Public key available on http://www.jblache.org - KeyID: F5D6 5169 
 GPG Fingerprint : 935A 79F1 C8B3 3521 FD62 7CC7 CD61 4FD7 F5D6 5169 
-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.0 (GNU/Linux)
Comment: Processed by Mailcrypt 3.5.8 http://mailcrypt.sourceforge.net/

iD8DBQFCNeD0zWFP1/XWUWkRAl4UAKCjtZDbcu8VjqotuD3aTsQnDVIlrgCeNgkZ
O0UQXl6n+bmUNIOr/wCEzJ8=
=I9vA
-----END PGP SIGNATURE-----





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Andreas Barth [EMAIL PROTECTED] writes:

 * Tollef Fog Heen ([EMAIL PROTECTED]) [050314 10:55]:
 * Steve Langasek 
 
 | If you are planning any other transitions that will affect a lot of
 | packages, please let us know in advance.  We will need to complete the
 | larger transitions as fast as possible, to get testing back into a
 | nearly releasable state quickly again after the release.

 Multiarch.

 I have yet to see a proposal for how to do multiarch the right way.
 This might be partly related to the fact that I don't follow all lists
 around, but well - this needs to be done.

The proposal is on Tollef's page; let him repeat the URL.

 However, given the timeframe, I seriously doubt that we can do multiarch
 in time for etch.

We have already had a fully working toolchain and apt/dpkg with
multiarch support, and most of the essential packages patched for
multiarch as well. All of it has bitrotted by now, but Tollef is
writing his thesis on it and is actively working on updating multiarch
to the latest versions and ideas.

Note that multiarch is a package-by-package thing!

To function, only dpkg/apt/aptitude/dselect/... need to be made aware
(apt and dpkg patches are available); libs one doesn't depend on need
not be ported.

To build for multiple archs on a single host, the toolchain has to be
ported (Tollef is working on that first) [this is not required, but
important to users; it is bloody annoying not to have].

For the system to be usable to general users, all essential libs have
to be ported, and as many important/standard libs as possible until
etch. Porting a lib mostly means splitting the package into
arch-independent and arch-dependent parts, to allow multiple archs to
be coinstalled.

The amount of libs also depends on the arch. For i386/amd64, a lot of
libs are needed to run e.g. galeon 64-bit and OOo 32-bit. For ppc,
sparc, mips, mipsel and s390 there is probably no point in having
gnome 64-bit, as it is slower. I can very well see !amd64 multiarch
archs having a limited package list, restricting it to packages
benefiting from 64-bit address space, like mysql/postgresql.
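
The coinstallation goal can be illustrated with dummy per-arch library paths; the `<triplet>` directory naming is an assumption for illustration, not necessarily the proposal's final layout:

```shell
# Purely illustrative: arch-qualified library directories let the same
# library be installed for two archs at once.  The triplet directory
# names are assumed, and libfoo is a dummy placeholder file.
root=$(mktemp -d)
for arch in i386-linux-gnu x86_64-linux-gnu; do
    mkdir -p "$root/usr/lib/$arch"
    : > "$root/usr/lib/$arch/libfoo.so.1"   # dummy arch-specific file
done
ls "$root/usr/lib"
```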


On the timeframe issue:

If it weren't for sarge blocking us, we would have submitted multiarch
patches as early as one year ago. Should we start submitting / NMUing
them for _experimental_ now, to get this change running and tested? Or
should we keep waiting for sarge to be released and then mass-file
them the next week?

Also note that multiarch and non-multiarch packages are fully
compatible both ways. Multiarch works on old dpkg (with just one arch),
and non-multiarch still works with multiarch dpkg/apt (but limiting
what dpkg lets you install multiarch, as old debs conflict). If we add
multiarch patches to etch but then don't do multiarch, all we end up
with are some packages split finer than now.

 Cheers,
 Andi

MfG
Goswin





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Steve Langasek
On Mon, Mar 14, 2005 at 12:27:25PM +0100, Ingo Juergensmann wrote:

  To me this decision sounds like a very good idea. Catering to some very
  specialised architectures can be good, but should not be a great burden on
  the total project. Trying to include everything in one big distribution is
  inherently not working (as has been shown with sarge). It is very well
  possible to maintain high quality ports of Debian, and infrastructure is
 provided for that, without making the release dependent on it.

 But the number of archs is not the huge problem that some people want to
 make us believe. 
 I think the main problem is the general size of the distribution in number
 of packages. You can't get 10.000 packages into a stable shape for a
 release, quite simply.

Can't you?

Removing unimportant buggy packages from testing is *easy* -- much
easier than trying to craft guidelines for declaring a set of core
packages.  Getting all of the packages that are considered too important
to release without into releasable shape is *hard*.  Hand-holding an RC
bugfix to make sure it gets built on all 11 architectures is much, much
harder.

I know which of these tasks has been eating more of my time for sarge.
Do you?

I'm all about portability, but I also believe it's important in terms of
our release cycle length for the release team to be in a position to set
release standards for architectures, much stricter than the ones we have
in place now.  This proposal represents our first stab at this.

-- 
Steve Langasek
postmodern programmer


signature.asc
Description: Digital signature


Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Matthias Urlichs
Hi, Goswin von Brederlow wrote:

 Another criterion amd64 fails is that packages must be DD-built and
 signed. That criterion seems really silly. If the archive is not in Debian
 but managed outside, why should packages be exclusively DD-built and
 signed?

Well, if the archive isn't in Debian then the signing is obviously not
required, but if it shall be *imported* into Debian as a new $arch, then
you either need to rebuild everything, or trust the person who signed the
binaries -- i.e., you need a DD.

-- 
Matthias Urlichs   |   {M:U} IT Design @ m-u-it.de   |  [EMAIL PROTECTED]





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Matthias Urlichs
Hi, Goswin von Brederlow wrote:

 If scc is split into a separate archive
 (separate hostname and all) and is strictly voluntary, it in no way affects
 the size or mirrors of the main archs.

As I understand it, SCC *binaries* get their own domain / mirrors /
everything, but the *source* shall be shared with the main archive.

I assume that storing the source twice is more work for the toolchain
people, and it's obviously wasted disk space.

-- 
Matthias Urlichs   |   {M:U} IT Design @ m-u-it.de   |  [EMAIL PROTECTED]





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Julien BLACHE
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Stephen Gran [EMAIL PROTECTED] wrote:

 Well, under this proposal, even if people do a lot of good work, they
 could be relegated to SCC for non-technical economic reasons (such as
 no new hardware being sold), and thus doomed to a slow, painful death
 in Debian.

 If you hadn't snipped the earlier part of my message where I said I
 thought this was intended to ensure that replacement parts would be
 available, then perhaps you would have had less to be upset with me
 about.

Please understand that getting replacement parts won't be a problem
for at least a dozen years. It's really simple to understand.

There are newcomers to Linux and Debian every day. This isn't going to
stop. Those people have potentially old hardware stored somewhere for
some reason (they're called geeks, or nerds sometimes).

You'll always find somebody willing to donate hardware to Debian. This
is a fact; it happened before. (And we've still got eBay -- the
Project already bought replacement parts off eBay in the past.)

JB.

- -- 
 Julien BLACHE - Debian  GNU/Linux Developer - [EMAIL PROTECTED] 
 
 Public key available on http://www.jblache.org - KeyID: F5D6 5169 
 GPG Fingerprint : 935A 79F1 C8B3 3521 FD62 7CC7 CD61 4FD7 F5D6 5169 





Re: Edge and multi-arch (was Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-14 Thread Steve Langasek
Hi Martin,

On Mon, Mar 14, 2005 at 12:18:33PM +, Martin Michlmayr wrote:
 * Tollef Fog Heen [EMAIL PROTECTED] [2005-03-14 13:10]:
  | I have yet to see a proposal how to do multiarch in the right way.
  What is lacking in the proposals out there?

 The following is what I (as DPL) sent to the release people in January
 to get them to discuss these issues.  I didn't post this to a list
 because what I wrote is kinda rough and I wanted the release people to
 clarify and post it.  Since this hasn't happened yet, I might just as
 well post my original message.  But please note that some important
 things might be missing in it.

 Basically, there has been a lot of discussions about multi-arch and
 some people seem to think that after sarge we'll _obviously_ move to
 multi-arch.  Well, this is not so obvious to me.  In particular, I see
 no consensus among ftpmaster/archive people, release people, toolchain
 people, porters, and basically everyone else that this is the way to
 go.  If we decide to go with multi-arch, we need:

   - agreement of all these people
   - a _clear_ plan about this migration (and have this plan before
 sarge is out), including a clear timeplan (announcement on day X,
 maintainers have Y months to upload, if they don't do it in Y
 months, we'll have a team of Z people who'll NMU the packages by
 G).
   - a proof of concept (this may exist already)
   - agreement with some upstream LSB people that it's a good idea for
 Debian to pioneer this in the hope that others will follow suit
 (rather than a way of Debian to make itself incompatible with
 the rest of the world).  [Chris Yeoh and taggart are the people
 to talk to.]

 There may be a few other things missing, but basically the multi-arch
 people have to show a clear plan _now_ how and why this migration is
 supposed to happen.

Yes, although nothing's made it to the list yet about this (was going to
start a separate thread for that this week, actually), I have had
conversations with Tollef, Matt (Taggart) and the ftp-masters about
multiarch to get a handle on what the issues are.

It seems that we do have a basic proof-of-concept (Tollef's link), but
neither the LSB folks nor the ftp-masters are sold on the idea yet; the
ftp-masters seem to think there have been too many, mutually
incompatible proposals floating around.  Having basic support for
multiarch in glibc/dpkg/toolchain seems sound, but actually using it in
Debian packages for etch seems to hinge on the other concerns above.

-- 
Steve Langasek
postmodern programmer




Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Sven Luther [EMAIL PROTECTED] writes:

 On Mon, Mar 14, 2005 at 10:16:20AM +, Martin Michlmayr wrote:
 * Aurélien Jarno [EMAIL PROTECTED] [2005-03-14 10:56]:
  Would it be possible to have a list of such proposed architectures?
 
 amd64, s390z, powerpc64, netbsd-i386 and other variants, sh3/sh4, m32r

 ppc64 is not currently a candidate for a separate arch, the path to go is a
 biarch solution, much like the current amd64 solution in sarge, as word from
 the main ppc64 developers is that a pure64 solution will be a measurable
 performance hit on power (unlike amd64, which is saddled with a weaker
 instruction set to start with).

 One could add per-subarch optimized builds and mirrors too though. 

 Friendly,

 Sven Luther

The way to go is to have a separate architecture but with a very
limited set of packages. This allows packages to keep the same name
instead of prefixing it with the bit width (libc6 instead of lib64c6,
zlib instead of lib64z). It also hides the 64-bit packages from users
of ppc32, as their setup would not include the ppc64 architecture.

The multiarch porting needed for i386/amd64 requires every package to
be compiled for both. People do want pure 64-bit amd64 installs. Third-party
software for i386/amd64 needs a lot of libs, many more than ppc64
would need, and doing those the current biarch way is insane compared
to multiarch. It would also duplicate all those packages as 32-bit
amd64 and 64-bit i386 packages.

And once multiarch is in the source, ppc64 just has to start the
buildd. Doing biarch for ppc on top of that would be stupid.


So my suggestion for ppc64 (and all other !amd64 multiarchs) is to
either add lots of packages to p-a-s (more sensible would be a white
list), add lots of packages to NFU in wanna-build or provide a large
no-auto list to the buildds to keep out unneccesary packages and treat
it just like any other arch otherwise.
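Goswin's whitelist idea can be sketched concretely. The Python below is a hedged illustration only: the rule syntax is invented and far simpler than the real Packages-arch-specific (p-a-s) file format, but it shows how a small whitelist keeps a limited-package port like ppc64 from building the whole archive:

```python
# Illustrative sketch of whitelist-style arch filtering in the spirit of
# Packages-arch-specific (p-a-s).  The rule syntax here is simplified and
# hypothetical; the real p-a-s format is more involved.

def should_build(package, arch, rules, default=True):
    """Return True if `package` should be built for `arch`.

    `rules` maps a package name to a list of architectures; a leading
    "!" on every entry turns the list into an exclusion list instead.
    """
    archs = rules.get(package)
    if archs is None:
        return default                      # unlisted package: use default
    if any(a.startswith("!") for a in archs):
        return "!" + arch not in archs      # exclusion-list entry
    return arch in archs                    # whitelist entry

# A ppc64-style port with a very limited whitelist: only what is needed
# for bootstrapping gets built, everything else is skipped.
rules = {
    "glibc": ["i386", "amd64", "powerpc", "ppc64"],
    "zlib":  ["i386", "amd64", "powerpc", "ppc64"],
    "gnome-desktop": ["i386", "amd64", "powerpc"],   # no ppc64 build wanted
    "mga-vid": ["!ppc64"],                           # exclude just one arch
}

print(should_build("glibc", "ppc64", rules))          # True
print(should_build("gnome-desktop", "ppc64", rules))  # False
print(should_build("mga-vid", "ppc64", rules))        # False
print(should_build("hello", "ppc64", rules))          # True (unlisted)
```

The same filter could equally back a buildd no-auto list: anything for which `should_build` returns False never enters the build queue.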

MfG
Goswin





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Matthias Urlichs
Hi, Julien BLACHE wrote:

 On the one hand, we have the Ubuntu cabal at key positions in the
 Project; on the other hand, we have Project Scud, which members are
 currently employed by companies having interests in Debian.
 
On the other hand, at least these people are employed working on
Debian-related stuff, and some of them are allowed to spend some/all of
their work time on it. IMHO that's a Good Thing.

 I don't like that smell... at all.

I don't think it smells. Do you have any evidence that it does? I don't.

In fact I'd suggest that unless you do have (and present) such evidence,
you should refrain from such statements. 

-- 
Matthias Urlichs   |   {M:U} IT Design @ m-u-it.de   |  [EMAIL PROTECTED]





Re: Call for help / release criteria (was: Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-14 Thread Julien BLACHE

David Schmitt [EMAIL PROTECTED] wrote:

  - First of all, we should take the details as a starting point for
discussion, not as a decision that has been made.  Nevertheless, we must
take into account that there are reasons for it: The people doing the
release, ftpmaster, etc. work noticed that they cannot go on like
before.

 Why don't they ask for help?

 They do so now. Are you (all) prepared to take up the call?

Yes, we are. There are enough interested people here to replace the
current people in charge.

We need some fresh blood now, everybody should realize it.

JB.

- -- 
 Julien BLACHE - Debian  GNU/Linux Developer - [EMAIL PROTECTED] 
 
 Public key available on http://www.jblache.org - KeyID: F5D6 5169 
 GPG Fingerprint : 935A 79F1 C8B3 3521 FD62 7CC7 CD61 4FD7 F5D6 5169 





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Thomas Bushnell BSG
Steve Langasek [EMAIL PROTECTED] writes:

 Removing unimportant buggy packages from testing is *easy* -- much
 easier than trying to craft guidelines for declaring a set of core
 packages.  Getting all of the packages that are considered too important
 to release without is *hard*.  Hand-holding an RC bugfix to make sure it
 gets built on all 11 architectures is much, much harder.

I have long thought that part of the problem here is that the release
team can directly solve some problems, but others it is always forced
to wait for someone else to solve them.

Release team can NMU bug fixes, and can push packages into testing
before the strict criteria are met, and can exclude packages from
testing by filing appropriate RC bugs.

Release team cannot adjust wanna-build priorities, remove packages
that are blocking bug fixes, or include new packages into the archive
(the most frequent problem being soname bumps on libraries), without
asking someone else to do it for them.

I think we could immediately solve a jillion problems if the release
team had more plenary ability to solve problems itself, directly,
rather than being forced to wait for someone else to solve some of
them.

Thomas





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Matthew Garrett
Matthias Urlichs [EMAIL PROTECTED] wrote:
 Hi, Goswin von Brederlow wrote:
 
 If scc is split into a seperate archive
 (seperate hostname and all) and is strictly voluntary it in no way affects
 the size or mirrors of the main archs.
 
 As I understand it, SCC *binaries* get their own domain / mirrors /
 everything, but the *source* shall be shared with the main archive.

Uh. Not if you want to distribute any GPLed material.

-- 
Matthew Garrett | [EMAIL PROTECTED]





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Alastair McKinstry
On Mon, 2005-03-14 at 16:16 +0100, David Schmitt wrote:
 On Monday 14 March 2005 12:05, Robert Lemmen wrote:
  - there must be a way for a scc arch to get a stable release. why don't
we either keep testing for scc archs but not do releases, so the
porters can do their own stable releases of their arch or have
per-arch testing? (the latter might lead to a source package explosion
i think)
 
 AFAI can tell, anybody can host an archive of packages built from stable 
 sources for a scc or unofficial port. And - if I read the conditions on 
 becoming a fully supported Debian arch right - then having security support 
 for an external pool of this arch is a good indicator that it should be a 
 fully supported stable release (amongst other things).

The plan as proposed is that the Debian scc ports are purely builds of
unstable. Hence this build out of the last release (e.g. etch) becomes a
subproject of a second-class project of Debian. It effectively has
little credibility.


 If on the other hand nobody can be found to recompile packages after DSAs are 
 released for this arch, I believe the arch shouldn't be released for Debian 
 as stable.

This is the important part. The point of a release is that
(1) it works, and
(2) it carries a promise that it _will be maintained_: that there will
be security fixes, and eventually an upgrade path.

For people to use a Debian port as a distribution they must have a
degree of faith that there will be people willing and capable of doing
timely security fixes for its lifetime: the size of the Debian Project
promises that. While an individual would probably be able to take a
working etch release and do the necessary fixes for e.g. s390 to
release, more is required for people to place their trust in it as a
server distribution.

Then, what happens at End of Life? We must demonstrate that we also have
a plan to transition from etch to what comes next for SCCs as well as
i386; that s390 etch upgrades to s390 etch+1.

In essence I agree with your proposition, though: I think we could
release e.g. four architectures as Tier-1, with other architectures
following later: these would involve some version skew, but kept
minimal, so that 's390 etch' is etch+only necessary fixes. The degree of
version skew would be reasonable and such that the s390 port team could
keep s390 Debian maintained with security fixes.

The challenge, however, is to do this _within Debian_, so that
maintainers will apply patches to their packages that are required for
minor ports, and keep version skew and hence complexity to a minimum.
Hence we should do what we can to keep minor ports _within_ debian, even
mostly symbolic gestures such as keeping Debian SCC ports in 
ftp.debian.org/ports rather than 'offsite' in scc.debian.org, etc.
(Note: I agree that mirrors should be capable of doing partial mirrors,
rather than having to mirror the whole of ftp.debian.org, though).

Regards
Alastair McKinstry






Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Andres Salomon [EMAIL PROTECTED] writes:
 Additionally, they are being excluded from having access to important
 resources, and the possibility of filing RC bugs which is the only way
 to get lazy maintainers moving is being taken away.
 

 That's an awfully pessimistic view.  All porters need is some sort of
 leverage that allows them to force maintainers to accept or deal w/
 their patches; perhaps some QA team members who will NMU
 poorly-maintained packages on behalf of porters?  The amd64 crew seems to
 be getting along ok w/out having their FTBFS bugs considered RC..

I wouldn't call it getting along. It was a struggle all last year.

Do I have to remind you of the dpkg saga? Patches by the amd64 team
were ignored for ages, then added to CVS, then replaced for the
unstable upload. One of the maintainers then requested that every
amd64 porter sign a mail to revert the patch to what was submitted;
even after that he refused to revert the patch, and the CTTE had to be
called in to resolve the issue, which it did in favour of the amd64 team.

Or look in the BTS for the still-open amd64 bugs with patches ranging
100, 200, and I think even over 300 days back. Some packages got over 5
uploads to unstable with the amd64 patch ("Add amd64 to Architecture
line in debian/control" or "update configure script") being ignored.

As it is we can't even build CDs from Debian sources without someone
NMUing syslinux for us first.

It is true that many people were very helpful, but some were very
stubborn and some people just didn't care. The syslinux thing is
one of 5 issues left that require a patch in sarge. Everything else
has either been fixed during the last year or will be (and can be,
without too loud screams) excluded.

Special thanks go to the glibc and gcc maintainers for allowing
multiarch support into sarge enabling 64bit kernels for sarge i386 and
32bit support for amd64.

MfG
Goswin





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Matthias Urlichs
Hi, Humberto Massa wrote:

They do not really, provided you keep about all the intermediate .o files of
the preceding build, depending on the security fix naturally.

 My points are: (1) this is feasible/viable

No it isn't. (a) the disk space requirements are humungous. (b) currently
we do not verify that rebuilding without an intervening make clean
works, i.e. that all dependencies are set up correctly. In fact, Policy
mandates the make clean between builds. Therefore (c) you can't assume
that your fixed source will result in fixed binaries. This is a security
fix, so it's especially important that the binaries get fixed.
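Matthias's point (b) can be made concrete. The following Python sketch is purely illustrative (invented file names, and a toy stand-in for make's staleness rule rather than make itself): when a rule's prerequisite list is incomplete, rebuilding without a clean ships a stale result even though the source has been fixed.

```python
# Toy model of make's decision rule -- rebuild only when a *listed*
# prerequisite is newer than the target -- applied to a build rule
# whose prerequisite list is incomplete.  File names are invented.
import tempfile
import time
from pathlib import Path

def needs_rebuild(target, prereqs):
    """make's staleness test: rebuild if the target is missing or older
    than any listed prerequisite (unlisted inputs are invisible)."""
    if not target.exists():
        return True
    t = target.stat().st_mtime
    return any(p.stat().st_mtime > t for p in prereqs)

d = Path(tempfile.mkdtemp())
cfg, src, out = d / "config.h", d / "main.c", d / "prog"

def build():
    # The real build reads *both* inputs...
    out.write_text(src.read_text() + cfg.read_text())

src.write_text("program\n")
cfg.write_text("version=1\n")
prereqs = [src]                 # ...but the rule lists only main.c

if needs_rebuild(out, prereqs):
    build()

time.sleep(0.01)                # ensure the fix gets a newer mtime
cfg.write_text("version=2\n")   # the "security fix" lands in the source

rebuilt = needs_rebuild(out, prereqs)   # False: make sees nothing to do
if rebuilt:
    build()
stale = "version=1" in out.read_text()
print(rebuilt, stale)           # False True -- fixed source, unfixed build

out.unlink()                    # "make clean" forces a full rebuild...
if needs_rebuild(out, prereqs):
    build()
print(out.read_text().splitlines()[-1])   # ...and only now: version=2
```

This is exactly why Policy's mandated clean between builds matters for security updates: without it, correctness depends on every package's dependency graph being complete, which point (b) says we never verify.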

-- 
Matthias Urlichs   |   {M:U} IT Design @ m-u-it.de   |  [EMAIL PROTECTED]





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Goswin von Brederlow
Jeroen van Wolffelaar [EMAIL PROTECTED] writes:

 On Mon, Mar 14, 2005 at 09:19:27AM -0500, Andres Salomon wrote:
 On Mon, 14 Mar 2005 15:15:16 +0100, Marc Haber wrote:
  Additionally, they are being excluded from having access to important
  resources, and the possibility of filing RC bugs which is the only way
  to get lazy maintainers moving is being taken away.
  
 
 That's an awfully pessimistic view.  All porters need is some sort of
 leverage that allows them to force maintainers to accept or deal w/
 their patches; perhaps some QA team members who will NMU
 poorly-maintained packages on behalf of porters?  The amd64 crew seems to
 be getting along ok w/out having their FTBFS bugs considered RC..

 The developers reference has a significant piece of text about porting.
 It includes NMU possibilities (NMU's are *always* a possibility to work
 around lazy maintainers, recall that the Social Contract explicitely
 mentions you can never demand work from anyone).

 http://www.debian.org/doc/developers-reference/ch-pkgs.en.html#s-porting

 Note that porter patches for kFreeBSD and amd64 so far seem, as far as I
 can see, to be relatively swiftly applied anyway by maintainers, despite
 those patches not being RC either. This suggests to me that also in the
 future with patches for SCC architectures, this should normally not be
 a problem, and of course, NMU's are possible otherwise.

 --Jeroen

Looking just at the ones I reported:

http://bugs.debian.org/cgi-bin/pkgreport.cgi?which=submitter&data=brederlo%40informatik.uni-tuebingen.de&archive=no

#249397: FTBFS: amd64 missing in Architecture list
Package: mga-vid; Severity: important; Reported by: Goswin Brederlow [EMAIL PROTECTED]; Tags: patch; 301 days old.

# #249440: inetutils: Wrong Priorities and Sections in debain/control break debian-amd64
Package: inetutils; Severity: important; Reported by: Goswin Brederlow [EMAIL PROTECTED]; merged with #205487, #26, #290700; 301 days old.

# #251765: FTBFS: missing amd64 support
Package: scm; Severity: important; Reported by: Goswin von Brederlow [EMAIL PROTECTED]; Tags: patch; 288 days old.

# #252760: FTBFS: architecture missing
Package: mkrboot; Severity: important; Reported by: Goswin von Brederlow [EMAIL PROTECTED]; 282 days old.

# #252771: FTBFS: wrong architecture
Package: bsign; Severity: important; Reported by: Goswin von Brederlow [EMAIL PROTECTED]; 282 days old.

# #254089: FTBFS: test for res_mkquery is broken
Package: mtr; Severity: important; Reported by: Goswin von Brederlow [EMAIL PROTECTED]; Tags: patch; 274 days old.

# #255725: FTBFS: amd64 needs Build-Depends svgalig1-dev too
Package: cthugha; Severity: important; Reported by: Goswin von Brederlow [EMAIL PROTECTED]; 265 days old.



That suggests that FTBFS bugs for SCC archs will be ignored just as
long: 1/2 to 3/4 of the planned release cycle. Now imagine a bug in
fsck that destroys data being left open for that long.


SCC will become useless real quick unless porting NMUs remain allowed
after 7 days as they are now, and unless porters exercise those NMUs
more often.

MfG
Goswin





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Julien BLACHE

Matthew Garrett [EMAIL PROTECTED] wrote:

 You'll figure out that the timing for this new policy is absolutely
 perfect; we're a week away from the voting period for the new DPL
 term. The current DPL can't (and won't, obviously) do anything about
 it, and the candidates signed the proposal.

 I haven't signed the proposal. I'm undecided on the technical side of

Sorry, I simplified the statement a bit.

 things (I'd rather see a list of the problems that are being solved, and
 a description of how these proposals fix those problems), and I think
 the way the meeting and conclusions were announced was fairly
 disastrous.

That's the very least we can say about it, yes.

 I should add that the
 Vancouver meeting was announced at the very last minute, too. And I'm
 wondering, who paid for the travel expenses ? Did the people involved
 pay out of their own pocket ? Did the Project pay ? Did somebody else
 pay the bill ?

 As mentioned in
 http://lists.debian.org/debian-project/2005/03/msg00015.html , the
 funding came from NUUGF. As far as I know, the project spent no money on
 this.

For once, it was actually documented. I read the announcement quickly
when I received it, and didn't remember that bit. Thanks for pointing
it out.

JB.

- -- 
 Julien BLACHE - Debian  GNU/Linux Developer - [EMAIL PROTECTED] 
 
 Public key available on http://www.jblache.org - KeyID: F5D6 5169 
 GPG Fingerprint : 935A 79F1 C8B3 3521 FD62 7CC7 CD61 4FD7 F5D6 5169 





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Julien BLACHE

Matthias Urlichs [EMAIL PROTECTED] wrote:

 One the one hand, we have the Ubuntu cabal at key positions in the
 Project; on the other hand, we have Project Scud, which members are
 currently employed by companies having interests in Debian.

 On the other hand, at least these people are employed working on
 Debian-related stuff, and some of them are allowed to spend some/all of
 their work time on it. IMHO that's a Good Thing.

Sure, that's good. It stops being that good when they're obviously
trying hard to impose their employer's agenda on the Project.

 I don't like that smell... at all.

 I don't think it smells. Do you have any evidence that it does? I don't.

I haven't, but maybe I'll have some soon. But then, everybody will see
it, and it will be too late.

 In fact I'd suggest that unless you do have (and present) such evidence,
 you should refrain from such statements. 

I don't like to make such statements, believe me. Fact is, the Project
is getting out of the control of its members, to the benefit of a
few. That's not how it's supposed to work.

JB.

- -- 
 Julien BLACHE - Debian  GNU/Linux Developer - [EMAIL PROTECTED] 
 
 Public key available on http://www.jblache.org - KeyID: F5D6 5169 
 GPG Fingerprint : 935A 79F1 C8B3 3521 FD62 7CC7 CD61 4FD7 F5D6 5169 





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Steve Langasek
On Mon, Mar 14, 2005 at 02:23:36PM +, Matthew Garrett wrote:
 Andreas Schuldei [EMAIL PROTECTED] wrote:
  On Mon, Mar 14, 2005 at 11:58:06AM +, Matthew Garrett wrote:
  Uhm. You knew that conclusions from that meeting would be likely to
  contradict the answers from other DPL candidates, but you did nothing to
  make them aware of this before they had those answers published to a
  large audience?

  I knew nothing about the candidates' answers, no.

  I did not use the knowledge to my own advantage, either.

 How is using that knowledge to sidestep the question not using it to
 your own advantage? We have a situation where several DPL candidates
 have voiced support for the release team, only to have an announcement
 three days later that flatly contradicts them. Would it not have been
 better for Debian if you'd told the other candidates what decision had
 been reached?

Correct me if I'm wrong, but I think the timeline went something like:

- Sunday evening, meeting adjourns
- Monday noon-ish, first real draft of the meeting report posted
- Tuesday morning, everyone who needs to review it gets on a plane
- Tuesday evening, DPL candidates have their deadline for responding to
  LWN interview questions

(all times UTC-0800)

I hope you'll be somewhat forgiving of the people involved for the
unfortunate case of timing at work here.

-- 
Steve Langasek
postmodern programmer




Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread David Nusinow
On Mon, Mar 14, 2005 at 09:25:02PM +0100, Julien BLACHE wrote:
 Sure, that's good. It stops being that good when they're obviously
 trying hard to impose their employer's agenda on the Project.

Sarge was already very late before Ubuntu existed. Our mirror network was
already strained before Ubuntu existed. Our release team was struggling to get
sarge out before Ubuntu existed. Our security team was already undermanned
before Ubuntu existed. d-i was short on contributors and had a hard time
releasing before Ubuntu existed.

We have only ourselves to blame for where we're at now.

 - David Nusinow





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Moritz Muehlenhoff
Matthew Garrett wrote:
 As I understand it, SCC *binaries* get their own domain / mirrors /
 everything, but the *source* shall be shared with the main archive.

 Uh. Not if you want to distribute any GPLed material.

The FSF doesn't consider this a problem:
http://www.gnu.org/licenses/gpl-faq.html#TOCSourceAndBinaryOnDifferentSites

Cheers,
Moritz





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Matthew Garrett
On Mon, 2005-03-14 at 21:49 +0100, Moritz Muehlenhoff wrote:
 Matthew Garrett wrote:
  As I understand it, SCC *binaries* get their own domain / mirrors /
  everything, but the *source* shall be shared with the main archive.
 
  Uh. Not if you want to distribute any GPLed material.
 
 The FSF doesn't consider this a problem:
 http://www.gnu.org/licenses/gpl-faq.html#TOCSourceAndBinaryOnDifferentSites

It's not clear that an FTP site really satisfies that, and it's also the
case that this is the FSF's interpretation rather than being the one
that all GPL copyright holders hold. I'd worry that we might fall foul
of some (seemingly valid) GPL interpretations.

-- 
Matthew Garrett | [EMAIL PROTECTED]





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Joerg Jaspert
Sven Luther wrote:

 In fact I strongly suggest switching to source-only after Sarge is
 released.
seconded, and ubuntu has proven that it is possible.

Ubuntu this, ubuntu that, ubuntu there, ...

Eh, just because Ubuntu did it, it's good? Then why a "no" to dropping
the other arches - Ubuntu only has 3, IIRC. That must be good, too.

Just because Ubuntu did something, and it may work for them, doesn't
mean it's good for Debian.

-- 
bye Joerg





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Scott James Remnant
On Mon, 2005-03-14 at 21:25 +0100, Julien BLACHE wrote:

 Matthias Urlichs [EMAIL PROTECTED] wrote:
 
   On the one hand, we have the Ubuntu cabal at key positions in the
  Project; on the other hand, we have Project Scud, which members are
  currently employed by companies having interests in Debian.
 
  On the other hand, at least these people are employed working on
  Debian-related stuff, and some of them are allowed to spend some/all of
  their work time on it. IMHO that's a Good Thing.
 
  Sure, that's good. It stops being that good when they're obviously
  trying hard to impose their employer's agenda on the Project.
 
There's no particular reason for Ubuntu developers to try and impose
Canonical's agenda on Debian; we have our own distro for (and because we
have) our own agenda.

We only release Ubuntu with three architectures; it doesn't gain or lose
us anything if Debian releases with one, three, or thirteen
architectures.

Scott
-- 
Have you ever, ever felt like this?
Had strange things happen?  Are you going round the twist?




Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Aurelien Jarno
On Sun, Mar 13, 2005 at 08:45:09PM -0800, Steve Langasek wrote:
 - the release architecture must have N+1 buildds where N is the number
   required to keep up with the volume of uploaded packages
 
 We project that applying these rules for etch will reduce the set of
 candidate architectures from 11 to approximately 4 (i386, powerpc, ia64
 and amd64 -- which will be added after sarge's release when mirror space
 is freed up by moving the other architectures to scc.debian.org).

AFAIK, all 4 of the architectures listed here have only one buildd
each. That makes me think more and more that the list of arches was
chosen before the criteria - and that no check was done afterwards...


-- 
  .''`.  Aurelien Jarno | GPG: 1024D/F1BCDB73
 : :' :  Debian GNU/Linux developer | Electrical Engineer
 `. `'   [EMAIL PROTECTED] | [EMAIL PROTECTED]
   `-people.debian.org/~aurel32 | www.aurel32.net





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Bdale Garbee
[EMAIL PROTECTED] (David Schmitt) writes:

 On Monday 14 March 2005 11:10, Rene Engelhard wrote:
 ppc is barely at 98%. I don't think that barrier should be that high. We
 *should* at least release with the three most important archs: i386, amd64,
 powerpc.

 Please, 98% is not high. It is just a call to porters to get their act 
 together.

When I was actively running the ia64 and hppa buildd systems, which I am not
doing today, it was my observation that completion percentage was noisy and
depended a lot on what was happening in the upload queue on a given day.  So,
while I agree that 98% seems like a reasonable goal, there are some things I
think it is worth keeping in mind if you try to look at the raw numbers.

The number of uploads per day varies a lot.  Some days it's fairly quiet, some
days it's not... and a really productive bug-squash weekend can run the upload
rate *way* up and depress the completion percentages a bit until everyone
catches up.

If there's a buildability problem in an upload that's at the base of a build 
dependency tree, the completion percentage can droop quite a bit until that 
issue gets resolved, then it may bounce back up more or less immediately when 
it gets fixed.  If you happen to look at the numbers in the middle of one of
these events, you'll get unexpected results.

A transient infrastructure problem at a sensitive time of day can cause a
droop in completion percentage because of email being backed up, etc.  Easily
enough to cross any magic threshold depending on which day you look.

The admins of the various buildds have different patterns of behavior in terms
of how often they review and sign uploads.  If they're on it all the time day
and night, the visible completion percentage is likely to be higher than if
they do this once a day... and the time of day relative to when katie runs
can make a difference, too!

My point is that I would expect any rational application of this sort of 
criteria to look at more than a single instant in time...  I don't think it
is very useful to fret over the noise on the signal day to day.  Some
buildd admins treat this as a real game, and work hard to be at the top of 
completion graphs every day... some don't.  

The real question on the day of release is what the build percentage of 
'testing' is for each architecture, and that's a pretty easy place to drive 
the numbers near or to 100% if we think it's important enough!
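Bdale's suggestion to look at more than a single instant amounts to smoothing the daily numbers before comparing them to a threshold. A minimal sketch, with invented completion figures:

```python
# Judging archive-completion percentage over a window rather than a
# single day's snapshot.  The daily figures below are invented.

def moving_average(values, window):
    """Trailing moving average; shorter prefixes are averaged over
    whatever data is available so far."""
    out = []
    for i in range(len(values)):
        chunk = values[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# A bug-squash weekend (day 4) temporarily depresses completion:
daily = [98.2, 98.4, 98.1, 95.0, 97.9, 98.3, 98.5]
smoothed = moving_average(daily, window=7)

# The single-day dip crosses a 98% threshold; the weekly view barely moves.
print(min(daily), round(smoothed[-1], 2))
```

Any magic threshold applied to the raw daily signal would flag day 4 as a failure, while the windowed view shows an arch that is in fact keeping up.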

Bdale





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 08:07:03PM +0100, Wouter Verhelst wrote:
 Op ma, 14-03-2005 te 19:15 +0100, schreef Sven Luther:
  so the buildd admin really examine all the packages for deviation that a
  compromised buildd could have incorporated before signing them ? Or that 
  they
  scan the machine for a compromise and always detect them before signing ? 
 
 Not really.
 
 As you know, nothing gets uploaded to the archive without it having a
 gpg signature by a key in the Debian gpg keyring. That goes for
 autobuilt packages, too.
 
 Also, I never sign stuff unless it gets through my filters and into the
 right Maildir (and one of the things my filters check is the 'From'
 address), so only the correct host will be able to upload.
 
 Apart from that, I regularly log in to my buildd hosts, and check up on
 them. If the host were compromised, I'd notice -- just as much as I'd
 notice if anyone would compromise my firewall.

But you would notice all this just the same if the signing were automated,
wouldn't you?  None of the procedures above would let you discover a package
built on a compromised buildd any better than if it were auto-signed.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Brian M. Carlson
On Sun, 2005-03-13 at 20:45 -0800, Steve Langasek wrote:
 Architectures that are no longer being considered for stable releases
 are not going to be left out in the cold.

I disagree. I feel that maintainers are going to ignore the SCC
architectures for the purposes of portability bugs and security fixes.

 - binary packages must be built from the unmodified Debian source
   (required, among other reasons, for license compliance)

This is a problem. No one will fix the portability bugs that plague, for
example, sparc (memory alignment SIGBUS) without them being severity
serious.

Therefore, I would support this plan *iff* it were stated that
portability bugs were still severity serious (I would not object to an
etch-ignore tag for the purpose of stating that they are irrelevant to
the release), that security bugs were still severity grave and critical
(again etch-ignore would be okay), and that maintainers actually have to
fix such bugs, or their packages could be pulled from the archive as too
buggy to support.

For the record, I own more sparc machines than any other single
architecture, and I am not pleased about this plan.







Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 10:21:13PM +0100, Joerg Jaspert wrote:
 Sven Luther wrote:
 
  In fact I strongly suggest switching to source-only after Sarge is
  released.
 seconded, and ubuntu has proven that it is possible.
 
 Ubuntu this, ubuntu that, ubuntu there, ...
 
 EH, just because ubuntu did it its good? Then why a no to the drop
 other arches - ubuntu only has 3 IIRC. It must be good.
 
 Just because Ubuntu did something and it may work for them doesnt mean
 its good for Debian.

Ok, let's look at it differently. I was against dropping source-only uploads
back then, 2 or 3 years ago, and proposed an automated build of all arches,
but this was rejected by the ftp-masters at the time. That was years before
Ubuntu existed, or at least became common knowledge. Now the same people who
rejected it back then are implementing it in Ubuntu.

And yes, I think it would be good for Debian.

Friendly,

Sven Luther







Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Hamish Moffatt
On Mon, Mar 14, 2005 at 11:05:58PM +1000, Alexander Zangerl wrote:
 On Mon, 14 Mar 2005 23:49:44 +1100, Hamish Moffatt writes:
 On Mon, Mar 14, 2005 at 01:33:16PM +0100, Thiemo Seufer wrote:
  For anyone who uses Debian as base of a commercial solution it is a
  requirement. Grabing some random unstable snapshot is a non-starter.
 
 Sure. Who's doing that on anything but i386/amd64/powerpc?
 
 I am doing that on (ultra)sparcs, and I don't think I'm the only one.

Mutt says your signature on this post was bad.

Hamish
-- 
Hamish Moffatt VK3SB [EMAIL PROTECTED] [EMAIL PROTECTED]





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 09:08:15PM +0100, Goswin von Brederlow wrote:
 Sven Luther [EMAIL PROTECTED] writes:
 
  On Mon, Mar 14, 2005 at 10:16:20AM +, Martin Michlmayr wrote:
  * Aurélien Jarno [EMAIL PROTECTED] [2005-03-14 10:56]:
   Would it be possible to have a list of such proposed architectures?
  
  amd64, s390z, powerpc64, netbsd-i386 and other variants, sh3/sh4, m32r
 
  ppc64 is not currently a candidate for a separate arch, the path to go is a
  biarch solution, much like the current amd64 solution in sarge, as word from
  the main ppc64 developers is that a pure64 solution will be a measurable
  performance hit on power (unlike amd64, which is saddled with a weaker
  instruction set to start with).
 
  One could add per-subarch optimized builds and mirrors too though. 
 
  Friendly,
 
  Sven Luther
 
 The way to go is to have a seperate architecture but with a very
 limited set of packages. This allows packages to keep the same name
 instead of prefixing with the bit with (libc6 instead of lib64c6,
 zlib instead of lib64z). This also hides the 64bit packages for users
 of ppc32 as their setup would not include the ppc64 architecture.

Well, the main problem with this is that the consensus from the IBM Linux
ppc64 people is that a pure 64-bit userland on ppc64 is a performance hit, and
thus not worth it. Instead the preferred setup is a 32-bit userland over a
64-bit kernel, with ppc64 builds only of the applications that benefit from
them (mostly databases and other memory-hungry software). Of the other
distributions supporting ppc64, only Gentoo has gone the pure 64-bit way
(being a source distribution they can afford it, but they take the performance
hit for it), while SuSE, Red Hat and the rest have gone the biarch way, mostly
for the above reasons.

 Multiarch porting needed for i386/amd64 needs every package compiled
 for both. People do want pure 64bit amd64 installs. 3rd party

Sure, because going from 6 registers to 14 is a huge benefit.

 software for i386/amd64 needs a lot of libs, much more than ppc64
 would need, and doing those the current biarch way is insane compared
 to multiarch. It would also duplicate all those packages as 32bit
 amd64 and 64bit i386 package.
 
 And once multiarch is in the source ppc64 just has to start the
 buildd. Doing biarch for ppc on top of that would be stupid.

Well, once multiarch is ready, it can maybe be moved that way; I don't know.
I think how things are organized is orthogonal to how many packages get
rebuilt, and which ones. We first need a biarch toolchain so I can build
ppc64 kernels.

 So my suggestion for ppc64 (and all other !amd64 multiarchs) is to
 either add lots of packages to p-a-s (more sensible would be a white
 list), add lots of packages to NFU in wanna-build or provide a large
 no-auto list to the buildds to keep out unneccesary packages and treat
 it just like any other arch otherwise.

Let's first aim for a toolchain, proper ppc64 kernels and a psutils binary;
we can build from there as needed.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Hamish Moffatt
On Mon, Mar 14, 2005 at 02:41:35PM +0100, Frank Küster wrote:
 Hamish Moffatt [EMAIL PROTECTED] schrieb:
  On Mon, Mar 14, 2005 at 01:33:16PM +0100, Thiemo Seufer wrote:
  For anyone who uses Debian as base of a commercial solution it is a
  requirement. Grabing some random unstable snapshot is a non-starter.
  Sure. Who's doing that on anything but i386/amd64/powerpc?
 
 What about embedded stuff?

Is it necessary/useful to have a *release* of Debian for those?

Typically embedded systems are highly customised. I have used our
powerpc builds in an embedded system and I was happy to start with
unstable.

Hamish
-- 
Hamish Moffatt VK3SB [EMAIL PROTECTED] [EMAIL PROTECTED]





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Mark Brown
On Mon, Mar 14, 2005 at 09:35:38PM +0100, Goswin von Brederlow wrote:

 That suggests that FTBFS bugs for SCC archs will be ignored just as
 long, 1/2 - 3/4 of the planed release cycle. Now imagine a bug in fsck
 that destroys data being left open for so long.

In my experience of doing this sort of thing, this is the exception rather
than the rule, and in most cases it's due to inactivity rather than anything
else.  I can only recall one case where I felt that the maintainer was being
actively unconstructive, and there I do recall thinking that the issue was
more the maintainer's general attitude towards quality and addressing user
problems than any specific hostility to porting issues.

-- 
You grabbed my hand and we fell into it, like a daydream - or a fever.





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 11:34:19AM -0500, Stephen Gran wrote:
 This one time, at band camp, Sven Luther said:
  On Mon, Mar 14, 2005 at 09:27:25AM -0500, Stephen Gran wrote:
   This one time, at band camp, Ingo Juergensmann said:
On Mon, Mar 14, 2005 at 12:47:58PM +0100, Julien BLACHE wrote:

 Moreover, the criterias given in your mail are just so oriented
 towards/against some architectures, that it's a bad joke (I was going
 to write disgusting, really).

It's a total change of direction: from as long as there are people who
care, we will release those arch to no matter if there are people who
care, we just release mainstream archs. :-(
   
   No, I thought the proposal stated quite clearly, if there are users and
   there are porters, a given arch is able to be included.  All that means
   is that those interested will actually have to do some of the work to
   support things like security and the kernel.  I know many of you already
  
  That is a joke. Do you really think that the porters don't care about the
  kernel ? I would really like that you don't drag the kernel-team in these
  petty claims, as i don't think that this is a problem kernel-wise. For
  example the first to go for the recent 2.6.11 kernels that are in the work
  where powerpc and sparc, and now s390 :
 
 See my other posts, and further down in this one.  I am not interested
 in offending anyone, and I am not actively involved in any of these
 areas.  I am speaking from watching from the sidelines:
  watching while Woody was delayed for months
  watching while Sarge is delayed for . . . (months? a year? more?)
  watching while security fixes are delayed for weeks or months
 
 These are all unacceptable.

There is no real evidence that the ports are the ones holding up the
release, and this is something most DPL candidates told us less than a week
ago, too.

 That does not mean that I am saying either of a) drop the less frequently
 used arches, or b) the porters aren't doing thir jobs.  I don't believe
 either of these statements to be true.  If you are part of the good
 effort to port Debian to other architectures than the mainstream ones,
 I thank you.
 
 However, the people doing the heavy lifting of coordinating security
 releases, kernel management, and release management have spoken, and
 they've said they're overloaded.  That being the case, a change in the

They may be overloaded, but they didn't accept help in the past.

 distribution of labor is needed, and it likely means that the porters
 have to pick up the slack.  If you are already doing so, I don't think
 you have anything to worry about.  The arches I see as having problems in
 the future are those that have unresponsive buildd admins and slow to act

Like arm, for example; and who is the arm buildd admin? 

 port teams (or only a single person, who could easily be overwhelmed).
 These arches probably shouldn't be released as stable, so no real
 loss there.

There is a huge difference between a reasonable answer to these problems and
a blanket "we will drop all arches except the main 3 (and maybe pick up
amd64)".

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 03:52:22PM -0500, David Nusinow wrote:
 On Mon, Mar 14, 2005 at 09:25:02PM +0100, Julien BLACHE wrote:
  Sure that's good. It stops to be that good when they're obviously
  trying hard to impose their employer's agenda on the Project.
 
 Sarge was already very late before Ubuntu existed. Our mirror network was
 already strained before Ubuntu existed. Our release team was struggling to get
 sarge out before Ubuntu existed. Our security team was already undermanned
 before Ubuntu existed. d-i was short on contributers and had a hard time
 releasing before Ubuntu existed.

Oh, do you know when Ubuntu started hiring Debian developers? I think it was
at least a year ago, but they may have started earlier than that.

Not saying that means anything, but I do believe Ubuntu already existed at
the time the amd64 decision was made.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 04:10:23PM -0300, Humberto Massa wrote:
 Sven Luther wrote:
 incremental building supported? And finally, why isn't it considered a 
 technical solution?

 
 
 Because it is not needed for the fast tier1 arches ? 
  
 
 This is a chicken-and-egg thing, isn't it? And it should be considered a 
 *technical* solution, even if not a *political* one.

Ok, let me be blunt about this.

It is a political problem: the dpkg/buildd/ftp-master admins lack the will
to implement such a solution, and thus block any attempt at this kind of fix.

We would need at least a dpkg and build-tool upgrade for this to happen, and
the developers of those are the ones in the "let's drop all arches" camp.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 09:30:03PM +0100, Matthias Urlichs wrote:
 Hi, Humberto Massa wrote:
 
 They do not really, provided you keep about all the intermediate .o files of
 the preceding build, depending on the security fix naturally.
 
  My points are: (1) this is feasible/viable
 
 No it isn't. (a) the disk space requirements are humungous. (b) currently
 we do not verify that rebuilding without an intervening make clean
 works, i.e. that all dependencies are set up correctly. In fact, Policy
 mandates the make clean between builds. Therefore (c) you can't assume
 that your fixed source will result in fixed binaries. This is a security
 fix, so it's especially important that the binaries get fixed.

BTW, what about ccache ? 

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 08:04:53PM +0100, Marc Haber wrote:
 On Mon, 14 Mar 2005 19:11:01 +0100, Sven Luther
 [EMAIL PROTECTED] wrote:
 Well, it just calls for smarther mirroring tricks.
 
 Do not expect mirror admins to run Debian, and to be willing to pull
 smart mirroring tricks.

What do they use now? And doesn't it already allow mirroring only woody, or
only some arches? Or is there at least a plan to allow for this? 

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Hamish Moffatt
On Mon, Mar 14, 2005 at 04:27:50PM +0100, Matthias Urlichs wrote:
 Hi, Hamish Moffatt wrote:
 
  especially given the requirement that you need = 2 buildds.
 
 I consider that requirement to be not warranted, and indeed unjustified.

I would like to hear the rationale for the requirement. Currently I
agree that it seems unjustified.

 If I had to think of a rationale for it, the only one I could think of
 would be the architecture needs to be fast enough not to block security
 updates.
 
 However, I consider an update whose $ARCH binaries are released a week
 later not to be a problem. 

Agreed.


Hamish
-- 
Hamish Moffatt VK3SB [EMAIL PROTECTED] [EMAIL PROTECTED]





Re: Security Support and other reasoning (was: Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-14 Thread Hamish Moffatt
On Mon, Mar 14, 2005 at 06:27:04PM +0100, David Schmitt wrote:
 On Monday 14 March 2005 14:06, Sven Luther wrote:
  There was no comment from the security team about this new plan, we don't
  know for sure that this is the problem, we don't even know in detail what
  the problems are and how do they relate to the drastic solutions (in france
  we would say horse-remedies) proposed here.
 
 The problem I - as a system administrator - see is that waiting a week for a 
 security update might be not acceptable.

The alternative is that ALL architectures wait for security updates
until the tier 2 architectures are ready. Is that acceptable?


Hamish
-- 
Hamish Moffatt VK3SB [EMAIL PROTECTED] [EMAIL PROTECTED]





Re: Sarge release (Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 07:23:33PM +0100, Christian Perrier wrote:
 Quoting Steve Langasek ([EMAIL PROTECTED]):
  Hello all,
  
  As promised earlier on -project[0], after the release team/ftpmaster
  team meeting-of-minds last weekend, we have some news to share with the
  rest of the project.
  
  First, the news for sarge.  As mentioned in the last release team
 
 
 It looks like the giant noise generated by this mail (I sometimes
 wonder whether some people are in Debian just to make noise and
 criticize every action as long as it doesn't fit exactly with their
 point of view.) has hidden the first topic : we nearly have a
 realease schedule and sarge release is becoming more and more reality.

Yes, but the utterly arrogant and depreciating way in which this announcement
was made left no room for any other topic. Imagine we decided tomorrow to
throw all the i18n work out the window and to support only English for etch.
This is the same kind of announcement for the alternative arches.

No data on what exactly the problems were, no data on how those problems
relate to the proposed drastic solutions, no minutes of the meeting, no prior
announcement to involve the porters, nothing.

 May I ask to people who have jumped on the architecture handling
 topic to please consider also the great work made during this work
 session about *other* topics and maybe just say something about it
 also:)
 
 Anyway, I take this opportunity to thank the involved people for their
 time and work as well as their commitment to the project.

Yep, but even on the d-i team, it seems the work on all those non-first-tier
arches was worthless after all, and not taken into consideration.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Tollef Fog Heen
* Matthias Urlichs 

| Hi, Goswin von Brederlow wrote:
| 
|  Another criterium amd64 fails is that packages must be DD build and
|  signed. That criterium seems realy silly. If the archive is not in Debian
|  but managed outside why should packages be exclusively DD build and
|  signed.
| 
| Well, if the archive isn't in Debian then the signing is obviously not
| required, but if it shall be *imported* into Debian as a new $arch, then
| you need either rebuild everything, or trust the person who signed the
| binaries -- i.e., you need a DD.

*shrug*; then we'll rebuild the archive.  It's not hard to do and
doesn't take long on an AMD64.  We've done it a few times already.

-- 
Tollef Fog Heen,''`.
UNIX is user friendly, it's just picky about who its friends are  : :' :
  `. `' 
`-  





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Thomas Bushnell BSG
Matthew Garrett [EMAIL PROTECTED] writes:

 It's not clear that an FTP site really satisfies that, and it's also the
 case that this is the FSF's interpretation rather than being the one
 that all GPL copyright holders hold. I'd worry that we might fall foul
 of some (seemingly valid) GPL interpretations.

If the relevant tools were to automatically fetch from the right
place, and we have administrative control over both, then we certainly
do come under that statement.






Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Tollef Fog Heen
* Goswin von Brederlow 

| On that note I think amd64 fails the 5 DDs crtiteria. When we asked
| for inclusion we had 1 DD working on amd64 and several NMs I think. I
| think when we hit the 98% mark there were 2 DDs involved.

I can easily think of five DDs who would vouch for the inclusion of
amd64 into Debian.  There was a small crowd of them at FOSDEM.

-- 
Tollef Fog Heen,''`.
UNIX is user friendly, it's just picky about who its friends are  : :' :
  `. `' 
`-  




