Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-21 Thread Sven Luther
On Sun, Mar 20, 2005 at 12:13:13PM -0500, David Nusinow wrote:
 On Sun, Mar 20, 2005 at 10:05:15AM +0100, Sven Luther wrote:
  On Fri, Mar 18, 2005 at 12:06:15PM -0500, David Nusinow wrote:
   On Fri, Mar 18, 2005 at 05:43:26PM +0100, Adrian Bunk wrote:
[1] The installer might be a point, but since all sarge architectures
will have a working installer and I hope there's not another
installer rewrite planned for etch this shouldn't be a big issue.
   
   This is still an issue. Joey Hess's mails have indicated very clearly
   that it's difficult to get an installer release out even when all arches
   are already supported.
  
  This is a non-issue. The main problem was the kernel situation, which
  will be streamlined for etch into a single package, and maybe build
  issues, which could be solved by a separate build queue or priority for
  d-i issues.
 
 You know, you keep saying this and I have a really hard time
 believing it, although I don't follow the kernel list so please
 enlighten me if I'm wrong. 

Ok.

 If you have a single source package for 12 different architectures
 that's great, because when you have a security fix you can take
 care of that more easily. That's awesome.

Indeed. And better yet, you build the .udebs from the same source package, so
you don't need a separate kernel build followed by a separate .udeb build.
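To make the single-source idea concrete, here is a minimal, hypothetical
debian/control sketch (the source and binary package names are invented for
illustration, not the actual sarge-era ones): one source stanza emits both the
kernel image .deb and an installer .udeb, so a single upload and build covers
both.

```text
Source: linux-2.6
Section: devel
Priority: optional

Package: linux-image-powerpc
Architecture: powerpc
Description: Linux kernel image for PowerPC machines

Package: linux-kernel-di-powerpc
Architecture: powerpc
XC-Package-Type: udeb
Description: Linux kernel image for the Debian installer
```

The XC-Package-Type: udeb field is what marks a binary package as an installer
.udeb; with both stanzas in one source package, the .udeb comes out of the same
build that produces the .deb.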

 But then you'll be trading off for the same problems that every
 single other package faces: namely that if a kernel on a single arch
 has an RC bug then it affects the kernels on every arch. This strikes
 me as being very problematic, and the only way I see around it is
 to downgrade actual RC bugs, which isn't really a solution at all.

Then we rebuild. This has some implications for slower arches, and it is a
post-sarge issue anyway, so there is time for it; but the only solution for
this is to do partial rebuilds, and have a testing override for kernels on
slower arches.

Still, my claim is that delays like the ones joeyh complained about would
mostly disappear. A bit of context about them:

  1) a security fix causes an ABI change.

  2) a new kernel-source gets uploaded.

  3) all arches need to individually upload kernel-images.

  4) since there was an ABI change, the package name gets modified, and thus
 has to wait in NEW for NEW processing.

  5) .udeb packages have to be built (usually by the debian-boot team), and
 uploaded. (I don't know whether this implies a second round of NEW
 processing.)

  6) the .udebs and .debs have to move into testing simultaneously to avoid
 GPL violation, and general messy dependency and rebuild issues.

  7) the debian-installer daily images need to be built out of SVN using the
 above .udebs and tested.

  8) the debian-installer package is upgraded in sarge, and uploaded.

  9) each individual autobuilder needs to process this package, which may or
 may not be fast depending on the actual auto-builder situation.
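Most of the waiting in this chain comes from step 4. As a minimal sketch of
why (the version and ABI numbers below are invented for illustration, though
the kernel-image-version-abi-flavour naming convention is the real sarge-era
one): the ABI number is embedded in the binary package name, so an
ABI-changing security fix produces a package name the archive has never seen
before, and previously unseen names must pass through manual NEW processing.

```shell
# Illustration only: an ABI bump changes the binary package *name*,
# which is what sends the upload back through NEW processing.
version="2.6.8"
abi=2                 # ABI number before the security fix (invented)
flavour="powerpc"

old_pkg="kernel-image-${version}-${abi}-${flavour}"
echo "before the fix: ${old_pkg}"

abi=$((abi + 1))      # the security fix changed the kernel ABI
new_pkg="kernel-image-${version}-${abi}-${flavour}"
# The new name has never been seen by the archive, so it sits in NEW.
echo "after the fix:  ${new_pkg} (waits in the NEW queue)"
```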

Doing this for all arches, with unsynchronized per-arch kernel/debian-boot
maintainers (who may be away for a couple of days), causes no end of trouble,
especially with the additional delay that NEW processing imposes, and is what
causes the up-to-two-month upgrade times joeyh speaks about.

Having a common kernel package will greatly simplify the parts of this
process which involve the kernel team, and let you just do the security fix,
build and upload (either auto-built or hand-built), and then pass the baby to
the debian-boot people to handle as usual.

Hope this clarifies things a bit.


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-21 Thread Sven Luther
On Mon, Mar 21, 2005 at 04:31:57AM +0100, Thiemo Seufer wrote:
 David Nusinow wrote:
 [snip]
If you have a single source package for 12 different architectures
that's great, because when you have a security fix you can take
care of that more easily. That's awesome.
   
   We have that already.
  
  Great to hear. Then what is this new plan that the kernel team
  has? I'm definitely confused.
 
 For sarge, kernels are built in a two-stage process. The first stage is to
 create a DFSG-free .deb from the upstream source which contains a source
 tarball; the second is to build kernel images from another (arch-specific)
 .deb which build-depends on the source .deb. In the second stage,
 arch-specific patches can be added.

You forgot the third stage, the building of the .udebs.

 Post-sarge, it will be a one-stage process, which builds all kernel
 images from a single package.
 
But then you'll be trading off for the same problems that every
single other package faces: namely that if a kernel on a single arch
has an RC bug then it affects the kernels on every arch. This strikes
me as being very problematic, and the only way I see around it is
to downgrade actual RC bugs, which isn't really a solution at all.
   
   Most kernel security bugs hit either generic code, or all architectures
   equally.
  
  Yeah, but I'm talking about non-security RC bugs. From what
  little Sven has described I feel like the new kernel plan will
  make it so these platform-specific bugs are problematic for all
  architectures. Does the new integration from upstream take care
  of this and if not, how does the kernel team plan to deal with
  this issue?
 
 Those bugs are felt to be rare enough, especially for already
 released kernels. For active development, there's a constant stream
 of fixes anyway, so platform-specific things won't make much difference.

And a kernel team with people from each architecture will make resolving them
easier than a single maintainer alone in his corner, who may or may not have
the knowledge for the actual fix, and may or may not have the time to fix it.

Friendly,

Sven Luther





Re: NEW handling: About rejects, and kernels (Was: Re: NEW handling ...)

2005-03-21 Thread Sven Luther
On Mon, Mar 21, 2005 at 03:11:06PM +0100, Jeroen van Wolffelaar wrote:
 [ Please followup to the right list depending on the contents of your
 reply. Be aware I'm not subscribed to -kernel, so Cc me if needed ]
 
 On Mon, Mar 21, 2005 at 08:14:37AM +0100, Sven Luther wrote:
  [huge rant about NEW and hurting kernel stuff etc etc]
 
 Three remarks:
 
 Rejecting those would result in a pissed kernel maintainer team, I would say.
 
 Please be aware that NEW processing is human work. There's quite a big

Which is my main gripe with the subpart of it that could be automated. For
example, kernel-source-2.6.11 was uploaded just today, which means a plethora
of uploads all needing NEW processing. Can you give me any reason why this
really needs NEW processing, and why you don't trust the kernel team on this?

 backlog (currently still over 300 while I feel a lot got done already),
 and I at least try to err on the side of caution. This means, and yes,
 it has already happened, that we will occasionally reject an

The problem is not the reject; it is the weeks of no news and no open
communication channel. But again, I think and hope that this will get better
now.

 upload by mistake. If this happens to you, just reply to the mail (as
 its footer says, if you don't understand the reject, reply) and it will
 be looked into. Of course, if we decide it was a mistake and your package
 should be accepted, we'll process it out-of-order (The mistake I
 rectified yesterday was in NEW for 70 seconds, surely a record). Taking
 it as offence and acting accordingly could have negative effects on
 swift reprocessing.

There was no real swift processing in the past. Also, I believe that if
packages under consideration have problems, it would be best to include the
maintainer who made the upload in that process as early as possible.

  I think this case would have warranted at least a reply, don't
  you think?
 
 Maybe, if one would reply to all mails you send out, one wouldn't have
 time for ANY other Debian work. For example, you contributed 75 mails[1]
 within 24 hours to the Vancouver thread, consisting (excluding quoted
 text) of about 7522 words in 43kB of hand-written text[2]. I'm sorry,
 but you think it's weird people can't resist accidentally hitting the 'd'
 key when seeing an incoming mail from you?

Well, sending email to a discussion forum like debian-devel and sending email
to a Debian role address like ftp-master are not comparable, and I think it
shows a profound lack of responsibility on your part to even suggest this.
How would you feel about a developer ignoring bug reports from a certain
person just because he has posted a large number of emails to debian-devel?
A DD failing in his duties at least has the QA team and the MIA check to
watch over him, while the ftp-masters can indulge any uncontrolled whim and
we have no choice but to abide by it.

Furthermore, I see a serious flaw in your logic: the emails you quote
postdate the ftp-masters' failure to reply, and thus cannot be used to excuse
it.

 Anyway, regarding kernels: I can imagine sometimes, especially with the
 backlog we have currently, a swift processing of some kernel package
 might be warranted and help Sarge. If there is such a case, it would
 help if someone other than yourself from the kernel team contacted the
 right email address[3] about it; I had a hard time distilling from your

Why not me ? I would very much like a reason for that, am i in some way
blacklisted ? and if so for what reason ? And is this reason an acceptable
one, i seriously doubt so. I am part of the kernel team, and i did work on my
other packages which are more or less in good state, as well as actively
participated in the debian-installer work. Why should you not threat a
question on my part as from any other developer ? And if you do not, would it
not be understandable that i feel irritated by this inacceptable behavior that
has a blocking effect on my own participation to debian.

 mails if and which packages would genuinely benefit sarge if they were
 processed swiftly, of course together with a short and factual
 explanation. You can also try to make a release-team-person ask, but
 they are also busy people, so why bother them?

Whatever. I believe that your response to email sent to the ftp-master role
address should not be influenced by any negative personal opinion you may
have of me, even if it may be warranted. We all work together to make the
Debian release as great and swift as possible, and this kind of blacklisting
of some of our developers is unacceptable, and a severe failure of the
ftp-master role's responsibility toward the project.

Friendly,

Sven Luther





Re: NEW handling: About rejects, and kernels (Was: Re: NEW handling ...)

2005-03-21 Thread Sven Luther
On Mon, Mar 21, 2005 at 03:10:34PM +, Matthew Wilcox wrote:
 On Mon, Mar 21, 2005 at 03:20:29PM +0100, Sven Luther wrote:
   Anyway, regarding kernels: I can imagine sometimes, especially with the
   backlog we have currently, a swift processing of some kernel package
   might be warranted and help Sarge. If there is such a case, it would
   help if someone other than yourself from the kernel team contacted the
   right email address[3] about it; I had a hard time distilling from your
  
  Why not me ? I would very much like a reason for that, am i in some way
 
 Because you are impossible to deal with.  I think this mail from you shows
 all the characteristics which make you such a pain in the fucking arse.
 See a psychologist.  Really.

Thanks. Maybe I should resign from my Debian duties then, since I am not
wanted. Do you volunteer to take over my packages? Please handle parted, for
which I have been looking for a co-maintainer for 6 months, and take over the
powerpc kernels, as well as do my job in the Debian kernel team, the support
of powerpc issues in d-i, and the maintenance of a big part of the ocaml
subset.

Until you are ready to do that, it is not acceptable to imply that the
ftp-masters can be allowed to fail at their job and treat developers like
dirt just because there is no counter-power to them, and that I should put up
with every abuse from them.

Not friendly anymore, and expecting apologies from you, Matthew, and from the
whole ftp-master team for their discrimination against me.

Sven Luther





Re: A new arch support proposal, hopefully consensual (?)

2005-03-21 Thread Sven Luther
On Mon, Mar 21, 2005 at 12:26:13AM -0800, [EMAIL PROTECTED] wrote:
 It's not so fair to perpetrate a straw man attack against Sven's whole
 proposal just because he can't spell perfectly. Give the man credit
 where it's due for trying to better Debian.

Hehe, no offense taken, and I can understand Tapio's dislike of seeing his
capital misspelled :)

 BTW, Sven and the Vancouver crew, I appreciate your collective thinking
 about what's right for Debian and the minor arches, being a SPARC user
 with a production Debian system.

No feedback from the Vancouver team though; my guess is they will do their
stuff in their corner. Let's just hope that it will benefit everyone.

 --- Tapio Lehtonen [EMAIL PROTECTED] wrote:
  On Sun, Mar 20, 2005 at 12:45:33PM +0100, Sven Luther wrote:
   discussion forward in such a way that we can get a reasonable
   discussion at the helsinski debconf'05 meeting.
   
  
  That's Helsinki, you ignoramus, you.

Yep, my bad, I apologize for the misspelling. Actually I should have followed
my first intuition and said only debconf'05.

Friendly,

Sven Luther





Re: NEW handling ...

2005-03-21 Thread Sven Luther
On Mon, Mar 21, 2005 at 07:42:11PM +1000, Anthony Towns wrote:
 Sven Luther wrote:
 And what do you say of aj denying there is a NEW problem on the debian-vote
 threads ? 
 
 I don't know what Steve says, but I say: Cite.

I don't care what you say; I am out of this anyway. There is no way I can
continue spending my free time on Debian if it can be ignored just because
some people are too proud, or whatever, to even recognize they have made an
error, and force a hate campaign on anyone who dares open his mouth and
criticize.

 I don't believe I said any such thing -- NEW processing has been a 
 problem for some months now, which is why we were working on adding a 
 couple of new people to process NEW.

No? Look at your replies on -vote.

 That said, I believe you and others have been ridiculously hyperbolic in 
 how large a problem it's been.

And I believe you and others have been hyperbolic in how you ignore other
DDs, just because you are in power and can do as you please with the project.

Really hurt by this treatment,

Sven Luther





Re: NEW handling: About rejects, and kernels (Was: Re: NEW handling ...)

2005-03-21 Thread Sven Luther
On Mon, Mar 21, 2005 at 03:45:10PM +, Matthew Wilcox wrote:
 On Mon, Mar 21, 2005 at 04:08:19PM +0100, Sven Luther wrote:
  Thanks. Maybe I should resign from my Debian duties then, since I am not
  wanted. Do you volunteer to take over my packages? Please handle parted,
  for which I have been looking for a co-maintainer for 6 months, and take
  over the powerpc kernels, as well as do my job in the Debian kernel team,
  the support of powerpc issues in d-i, and the maintenance of a big part
  of the ocaml subset.
 
 I think Debian would be better finding someone else to do those tasks,
 yes.  I'm not going to volunteer for them as I intend to leave Debian
 shortly after sarge releases.  I can't believe Debian is so short on
 skills that it needs you.

DON'T EVER ADDRESS ME IN THE FUTURE, AND GET LOST.

Anyway, I am out of this, and you and Jeroen have managed to do it, along
with all those self-righteous ftp-masters and release team members who think
anyone complaining just whines, and don't care that they do exactly the same;
or those who like to complain about being the recipient of flamewars, and
then do the exact same thing to others.

Sven





Re: NEW handling: About rejects, and kernels (Was: Re: NEW handling ...)

2005-03-21 Thread Sven Luther
On Mon, Mar 21, 2005 at 03:11:06PM +0100, Jeroen van Wolffelaar wrote:
 Maybe, if one would reply to all mails you send out, one wouldn't have
 time for ANY other Debian work. For example, you contributed 75 mails[1]
 within 24 hours to the Vancouver thread, consisting (excluding quoted
 text) of about 7522 words in 43kB of hand-written text[2]. I'm sorry,
 but you think it's weird people can't resist accidentally hitting the 'd'
 key when seeing an incoming mail from you?

And what about the email I sent to get some erroneously ACCEPTED and then
REJECTED kernel packages removed from the REJECT queue? I had to mail twice
about this, and nothing happened for almost a month or so, all the while you
were spamming all of debian-kernel daily with said bogus reject message.

Hurt, 

Sven Luther





Re: NEW handling: About rejects, and kernels

2005-03-21 Thread Sven Luther
On Mon, Mar 21, 2005 at 05:40:44PM +0100, Petter Reinholdtsen wrote:
 [Sven Luther]
  the problem is not the reject; it is the weeks of no news and no
  open communication channel. But again, I think and hope that this
  will get better now.
 
 I agree.  Complete silence and no feedback is a real problem when it
 happens, and it is only worse when an official Debian role fails to
 communicate.  But I believe things are improving a lot when it comes
 to the ftpmaster role, and have great hopes that Jeroen is part of the
 solution. :)

No, he is not, as far as I am concerned, unless he presents his apologies
first.

  Well, sending email to a discussion forum like debian-devel and
  sending email to a Debian role address like ftp-master are not
  comparable, and I think it shows a profound lack of responsibility
  on your part to even suggest this.
 
 I believe Jeroen tried to express that mistakes do happen, and that
 the ftpmasters can delete email by mistake when their mailbox is
 filling up.

No, that is not acceptable, and probably not the right reason for this.
Until evidence proves otherwise, it is just that they don't care to read
those emails, and that that email address is simply forwarded to /dev/null.

 Perhaps this could be solved with some kind of ticket system handling
 email to the official roles in debian?  I'm not sure if BTS is the
 best option to handle emails to ftpmaster, leader and others.  Perhaps
 request-tracker is a better option?  We use it at work, and it seems to
 do request handling quite well (at least when we added the email
 administration interface. :).

That would be a solution. But then, are the ftp-masters ready to have the
problems they receive made publicly visible?

 What surprises me is the energy and hostility Matthew Wilcox
 demonstrates by attacking you in later private emails.  A good thing
 he isn't part of the ftpmaster team (as far as I can see).  The
 ftpmasters seem to have a professional attitude towards the role they
 have in the project.  I wish we could expect that from all the
 participants in the project.

No, a professional attitude would have them reply to the people they are
working with.

 (But you are right, Sven.  No-one should have to accept abuse for the
 work one does as a volunteer in the Debian project.  That applies for
 both you, the ftpmasters, the release managers, all the debian
 developers and the users.  Those unable to behave in a civil way towards
 their fellow volunteers should be ashamed of themselves.)

But this has become the norm these past couple of months, and Steve's
'proposal' was the last straw.

Hurt,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-21 Thread Sven Luther
On Mon, Mar 21, 2005 at 12:17:45PM +0100, Thiemo Seufer wrote:
 Sven Luther wrote:
 [snip]
   For sarge, kernels are built in a two-stage process. First is to create
   a DFSG-free .deb from the upstream source which contains a source
   tarball, second is to build kernel images from another (arch-specific)
   .deb which build-depends on the source .deb. In the second stage,
   arch-specific patches can be added.
  
  You forgot the third stage of the .udebs built.
 
 They can be built immediately after the kernel images are accepted, so
 there's little potential for a delay.

Only when building them gets forgotten, which happens from time to time.

Sven Luther





Re: NEW handling: About rejects, and kernels (Was: Re: NEW handling ...)

2005-03-21 Thread Sven Luther
On Mon, Mar 21, 2005 at 06:34:00PM +0100, Christian Perrier wrote:
  I'm quite unhappy that this thread has turned so bad.  Please, all of us
  who are part of this thread, can we please try to get the heat out.
 
 
 I can't agree more. What I have seen up to now makes me very
 sad. Seeing Sven consider resigning is sad news for me.

...

Thanks for this, it is heartening (or however you say that in English).

I should really not have participated in that thread (and I resent Steve a
bit for it), and I am probably better off not following debian-devel, as I
had not done for ages before.

Still, I believe I have made some constructive proposals, and even if my
first posts may have been a bit too aggressive, for which I apologize, or too
many, I think that is also proof of the passion this issue arouses. It has
the potential to affect much of what we believe Debian is, and it was handled
with utter contempt, at least in the initial posting.

Still hurt though,

Sven Luther





Re: NEW handling: About rejects, and kernels

2005-03-21 Thread Sven Luther
On Mon, Mar 21, 2005 at 08:40:11PM +0100, Christian Perrier wrote:
  I am truly sorry for losing you.  You have done a good job helping
  Debian progress the state of free software, and it is sad that you
  decide to throw in the towel because of hard language from a fellow
  Debian volunteer. :(
 
 
 I personally can't stop thinking that Sven can reconsider his too
 quick decision. Doing so would be a great sign of maturity and
 relativisation (sorry, I'm falling outside my English skills and could
 certainly express this better in French to Sven).

Yep, when sarge is safely out of the way in a couple of months or whatever.

Hurt,

Sven Luther





Re: NEW handling: About rejects, and kernels

2005-03-21 Thread Sven Luther
On Mon, Mar 21, 2005 at 08:28:44PM +0100, Petter Reinholdtsen wrote:
 [Sven Luther]
  No, he is not, as far as i am concerned, unless he presents his
  apologies first.
 
 For what?  Commenting on your vast amount of email posted the last few
 days, and his suggestion that the amount of email could make the
 ftpmasters delete mails by mistake?  I can not really believe that is
 your problem, so please enlighten me.

Sorry, but if they are not able to properly filter mail sent to the canonical
ftp-master address from the rest of their personal mail, I don't think they
are fit to do the job.

Also, his hint that future mail from me will be ignored is unacceptable as
well, and I cannot work with people who don't take their responsibilities
seriously.

  No, that is not acceptable, and probably not the right reason for
  this. Until evidence proves otherwise, it is just because they don't
  care to read those emails, and that that email address is simply
  forwarded to /dev/null.
 
 I didn't say it was acceptable.  I tried to put it in perspective.
 I'm well aware of at least some of the communication issues with the
 ftpmasters, but truly believe these problems are because the
 ftpmasters are overworked, not because they are evil.  And I believe

The real problem is that they deny there is a problem; how do you hope to get
it fixed then?

 this even though one of the ftpmasters told me on IRC to stop wasting
 his time when I wanted to discuss making the list of packages in NEW
 public.  I put it on the account of misjudgement during stress, not
 evil will.
 
 I suspect you would be better off if you accepted that misjudgement
 and mistakes happen also for the ftpmasters.  After all, your emails

Fine, but then I expect the same courtesy to go both ways, which it does not;
and furthermore they have the ultimate power to hinder my work and make my
life difficult, while the other way around is not true. With great power
comes great responsibility, and the least of it is to be civil and to reply
to emails sent by developers to the ftp-master role address.

 haven't been the perfect examples of rational and clear speech either
 (though not as hostile as others on the list. :)  I do not hold that
 against you, and wish you didn't hold such miscommunications and
 misjudgements against the other volunteers in Debian.

No, but they plainly refuse to admit there is a problem; what hope do you see
of it ever being fixed then?

  That would be a solution. But then are the ftp-masters ready to get
  the problems they receive publicly visible ?
 
 I didn't propose to make it all public.  request-tracker is capable of
 fine grained access control.
 
  No, a professional attitude would have them reply to the people they
  are working with.
 
 Again, I agree that the ftpmaster role should reply to all requests.
 But if the volunteers filling this role are very busy, it does not
 help to shout at them and send even more email.  A different solution

I sent perhaps 3-4, or in any case fewer than 10, emails to them over the
past two years. A couple of those were to have them clean up the reject queue
which was spamming debian-kernel daily; this is hardly shouting and sending
even more email, is it? I sent one mail and waited, and in the email spam
case sent a second or third a couple of weeks later, if I remember well.

Someone who does not have the time to reply to 3-4 civil emails in 2 years
should probably reconsider his involvement, or whatever.

 must be found, and I hope and believe we are on our way to a solution
 to the problems the project is facing.

Let's hope so, but I have some doubts.

  but this have become the norm these past couple month, and Steve's
  'proposal' was the last straw.
 
 I guess I do not read the proposal the way you read it.  I read it as
 a document describing the problems the release team and the ftpmaster
 experiences with the release process, and their ideas on how to
 improve the situation.  But first and foremost, I read the proposal as
 a good step forward for the release of sarge.  After all, the ideas
 for reorganizing the process for etch weren't the most important part
 of the Vancouver announcement.  The most important part was that the
 release managers and the ftpmasters are coordinated in their work to
 release Sarge.
 
 Since the meeting 189 packages have been processed from the NEW queue.
 I believe this is the result of the meeting, where the ftpmasters were
 able to meet with prospective ftpmaster assistants.  I also believe the
 increased effort to release sarge is a result of this meeting.

What increased effort? Starting a giant flamewar by being utterly
contemptuous of our porters? They could have published that part separately,
and post-sarge or whatever. And I didn't see a single line of apology or
recognition that they may have been wrong.

 I am truly sorry for losing you.  You have done a good job helping
 Debian progress the state of free software, and it is sad that you

Re: NEW handling: About rejects, and kernels

2005-03-21 Thread Sven Luther
On Mon, Mar 21, 2005 at 05:23:12PM +, Matthew Garrett wrote:
 Sven Luther [EMAIL PROTECTED] wrote:
 
  No, that is not acceptable, and probably not the right reason for this.
  Until evidence proves otherwise, it is just because they don't care to
  read those emails, and that that email address is simply forwarded to
  /dev/null.
 
 This assertion isn't justifiable. I appreciate that you're upset about
 the amount of feedback you're getting from ftp-masters, but that doesn't
 mean you should take the opportunity to abuse them further. Doing so
 helps nobody, and certainly doesn't encourage them to send you more
 email.

I only state my own experience. I got no reply at all, so what do you expect?

 Problems with communication come from both sides. If you're rude to
 people, they become less likely to do useful stuff for you.

So? And when I am asked to spend time on stuff because this or that is needed
for the release, I should just do it? What would you say of a maintainer who
acted like this? QA would have taken over long ago, I believe.

Hurt,

Sven Luther





Re: How to define a release architecture

2005-03-21 Thread Sven Luther
On Mon, Mar 21, 2005 at 08:39:58PM -0300, Henrique de Moraes Holschuh wrote:
 On Tue, 22 Mar 2005, Peter 'p2' De Schrijver wrote:
  No. There needs to be some override procedure like we have for maintainers 
  not 
  doing their job. But that's beyond the scope of this discussion.
 
 In this case, there is nothing to override, because the overrides are
 actually changing something in the teams so that the team changes their mind
 (that might actually mean there is nobody who opposed the change in the team
 anymore, in a worst-case scenario).
 
 So, this should not be a point of contention in this sphere at all.  It
 belongs in some other level.  Let's drop this point as a contention point,
 then?

No, this is the main problem: there is no counter-power or limitation to what
they can decide. We saw this already in the amd64 GR issue, and we can either
accept their decision, or have them resign en masse and be prepared to
replace them.

There is no accountability, and although the DPL supposedly mandated them, he
has no actual power to do anything about it.

Sven Luther





Accepted ocaml 3.08.3-1 (powerpc all source)

2005-03-21 Thread Sven Luther
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Format: 1.7
Date: Mon, 21 Mar 2005 07:46:26 +0100
Source: ocaml
Binary: ocaml-compiler-libs ocaml-native-compilers ocaml-base-nox ocaml-base 
ocaml ocaml-nox ocaml-interp ocaml-source
Architecture: source powerpc all
Version: 3.08.3-1
Distribution: unstable
Urgency: medium
Maintainer: Sven Luther [EMAIL PROTECTED]
Changed-By: Sven Luther [EMAIL PROTECTED]
Description: 
 ocaml  - ML language implementation with a class-based object system
 ocaml-base - Runtime system for ocaml bytecode executables
 ocaml-base-nox - Runtime system for ocaml bytecode executables
 ocaml-compiler-libs - Ocaml interpreter and standard libraries
 ocaml-interp - Ocaml interpreter and standard libraries
 ocaml-native-compilers - Native code compilers of the ocaml suite (the .opt ones)
 ocaml-nox  - ML language implementation with a class-based object system
 ocaml-source - Sources for Objective Caml
Closes: 287538
Changes: 
 ocaml (3.08.3-1) unstable; urgency=medium
 .
   * New upstream stable point version.
 - breaks binary compatibility, we thus have to up the ABI version
   to 3.08.3.
   * New features
 - ignore unknown warning options for forward and backward compatibility
 - runtime: export caml_compare_unordered (PR#3479)
 - camlp4: install argl.* files (PR#3439)
 - ocamldoc: add -man-section option (Closes: #287538)
 - labltk: add the solid relief option (PR#3343)
 - compiler: ocamlc -i now prints variance annotations
   * Bug fixes:
 - typing: fix unsoundness in type declaration variance inference.
   Type parameters which are constrained must now have an explicit variant
   annotation, otherwise they are invariant. This is not backward
   compatible, so this might break code which either uses subtyping or
   uses the relaxed value restriction (i.e. was not typable before 3.07)
 - typing: erroneous partial match warning for polymorphic variants (PR#3424)
 - runtime: handle the case of an empty command line (PR#3409, PR#3444)
 - stdlib: make Sys.executable_name an absolute path in native code (PR#3303)
 - runtime: fix memory leak in finalise.c
 - runtime: auto-trigger compaction even if gc is called manually (PR#3392)
 - stdlib: fix segfault in Obj.dup on zero-sized values (PR#3406)
 - camlp4: correct parsing of the $ identifier (PR#3310, PR#3469)
 - autoconf: better checking of SSE2 instructions (PR#3329, PR#3330)
 - graphics: make close_graph close the X display as well as the window (PR#3312)
 - num: fix big_int_of_string (empty string) (PR#3483)
 - num: fix big bug on 64-bit architecture (PR#3299)
 - str: better documentation of string_match and string_partial_match (PR#3395)
 - unix: fix file descriptor leak in Unix.accept (PR#3423)
 - unix: miscellaneous clean-ups
 - unix: fix documentation of Unix.tm (PR#3341)
 - compiler: fix error message with -pack when .cmi is missing (PR#3028)
 - cygwin: fix problem with compilation of camlheader (PR#3485)
 - stdlib: Filename.basename doesn't return an empty string any more (PR#3451)
 - stdlib: better documentation of Open_excl flag (PR#3450)
 - ocamlcp: accept -thread option (PR#3511)
 - ocamldep: handle spaces in file names (PR#3370)
 - compiler: remove spurious warning in pattern-matching on variants (PR#3424)
Files: 
 9ad7cf5ede053299c0f66446c23a5443 736 devel optional ocaml_3.08.3-1.dsc
 108c19ac909e90ea13b98248f9d1af96 42070 devel optional ocaml_3.08.3-1.diff.gz
 d6c5fdcae6a079dae52d4f8e5714e887 6452134 devel optional ocaml-nox_3.08.3-1_powerpc.deb
 c8ff24b74e27e7f88d51662e8c40dac7 3101254 devel optional ocaml-native-compilers_3.08.3-1_powerpc.deb
 bde9426a2826f740a7bfba92d68e1401 1820412 devel optional ocaml_3.08.3-1_powerpc.deb
 32adf884a7fd455a9acf947d8eee13ee 160594 devel optional ocaml-base-nox_3.08.3-1_powerpc.deb
 dd4bbf1500203322560049f17bbf4284 67454 devel optional ocaml-base_3.08.3-1_powerpc.deb
 4120a5b7f2a833eb711d2062561cd38f 2061594 devel optional ocaml-source_3.08.3-1_all.deb
 1c26009ef2ae46d5f9ce3a2062c8fd15 934870 devel optional ocaml-interp_3.08.3-1_powerpc.deb
 4155a37efa51103f7fb3d80712cbadb8 839998 devel optional ocaml-compiler-libs_3.08.3-1_powerpc.deb

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.0 (GNU/Linux)

iD8DBQFCPnls2WTeT3CRQaQRAuf/AKCP8zcVOW0O8cO2Fa3U1SF4AaMx2gCfcoq9
Cgr9AZvlB7VaiHuSAuHOo6U=
=w+vQ
-END PGP SIGNATURE-


Accepted:
ocaml-base-nox_3.08.3-1_powerpc.deb
  to pool/main/o/ocaml/ocaml-base-nox_3.08.3-1_powerpc.deb
ocaml-base_3.08.3-1_powerpc.deb
  to pool/main/o/ocaml/ocaml-base_3.08.3-1_powerpc.deb
ocaml-compiler-libs_3.08.3-1_powerpc.deb
  to pool/main/o/ocaml/ocaml-compiler-libs_3.08.3-1_powerpc.deb
ocaml-interp_3.08.3-1_powerpc.deb
  to pool/main/o/ocaml/ocaml-interp_3.08.3-1_powerpc.deb
ocaml-native-compilers_3.08.3-1_powerpc.deb
  to pool/main/o/ocaml/ocaml-native-compilers_3.08.3-1_powerpc.deb
ocaml-nox_3.08.3

Re: my thoughts on the Vancouver Prospectus

2005-03-20 Thread Sven Luther
On Fri, Mar 18, 2005 at 06:44:46PM -0800, Steve Langasek wrote:
 [cc:ed back to -devel, since these are technical questions being raised and
 answered]
 
 On Mon, Mar 14, 2005 at 10:48:10PM -0500, Branden Robinson wrote:
  The next stage in the process is to actually sell the proposed changes for
  etch to the developers at large[2].  There are several points which can and
  should be discussed; I myself am not certain what the motivations for some
  criteria are, and it would be good to have those documented so that we can
  tell if and when they no longer apply.
 
  Let me offer some examples:
 
  * Why is the permitted number of buildds for an architecture restricted to
2 or 3?
 
 - Architectures which need more than 2 buildds to keep up with package
   uploads on an ongoing basis are very slow indeed; while slower,
   low-powered chips are indeed useful in certain applications, they are
   a) unlikely to be able to usefully run much of the software we currently
   expect our ports to build, and b) definitely too slow in terms of
   single-package build times to avoid inevitably delaying high-priority
   package fixes for RC bugs.

Which is solved by going with a delayed stable release, a separate testing
process, and delayed security updates if necessary, so it is hardly a point.

 - If an architecture requires more than 3 buildds to be on-line to keep up
   with packages, we are accordingly spreading thin our trust network for
   binary packages.  I'm sure I'll get flamed for even mentioning it, but
   one concrete example of this is that the m68k port, today, is partially
   dependent on build daemons maintained by individuals who have chosen not
   to go through Debian's New Maintainer process.  Whether or not these
   particular individuals should be trusted, the truth is that when you have
   to have 10 buildds running to keep up with unstable, it's very difficult
   to get a big-picture view of the security of your binary uploads.
   Security is only as strong as the weakest link.

That said, it only affects said port, and I believe the people in the m68k
community may trust them more than they trust us, and rightly so, given the
plan to drop them.

 - While neither of the above concerns is overriding on its own (the
   ftpmasters have obviously allowed these ports to persist on
   ftp-master.debian.org, and they will be released with sarge), there is a
   general feeling that twelve architectures is too many to try to keep in
   sync for a release without resulting in severe schedule slippage.

But there are intermediate steps possible between full support and the
let's-forget-about-them-and-let-them-fend-for-themselves proposal you made.

   Pre-sarge, I don't think it's possible to quantify slippage that's
   preventable by having more active porter teams vs. slippage that's
   due to unavoidable overhead; but if we do need to reduce our count of
   release archs, and I believe we do, then all other things being equal, we
   should take issues like the above into consideration.

Why didn't you take less drastic solutions into consideration? Or if you did,
why didn't you speak about them?

Friendly,

Sven Luther





Re: my thoughts on the Vancouver Prospectus

2005-03-20 Thread Sven Luther
On Sun, Mar 20, 2005 at 02:57:23AM +0100, Peter 'p2' De Schrijver wrote:
   * Three bodies (Security, System Administration, Release) are given
 independent veto power over the inclusion of an architecture.
 A) Does the entire team have to exercise this veto for it to be
effective, or can one member of any team exercise this power
effectively?
  
  It's expected that each team would exercise that veto as a *team*, by
  reaching a consensus internally.
 
 This is obviously unacceptable. Why would a small number of people be
 allowed to veto the inclusion of other people's work?

And a non-elected, non-properly-delegated, self-apointed group of people at
that.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-20 Thread Sven Luther
On Fri, Mar 18, 2005 at 09:22:11PM +1000, Anthony Towns wrote:
 Sven Luther wrote:
 I think the main reply is for developers using said archs.
 
 Developers *developing* on those architectures need to use unstable 

But it could be an unstable chroot, while their day-to-day work is done with
testing, which is the best way to detect problems early.

 anyway. If there aren't any users, then there's not much point doing any
 development. Are there any users? If so, what are they doing?

So, the DDs are just slaves of the release team/ftp-masters, and don't count
as users?

Friendly,

Sven Luther





Re: NEW handling ...

2005-03-20 Thread Sven Luther
On Fri, Mar 18, 2005 at 02:40:34PM +0100, David Schmitt wrote:
 On Friday 18 March 2005 13:26, Sven Luther wrote:
  And yes, i volunteer to help out NEW handling, if that help is wanted.
 
 Vapourware. I believe that for most packages it is quite easy to see why they
 are not allowed into unstable. Compile this list+reasons so that everyone who
 is interested in these packages can quickly see where the problems are. If
 there is any interest in the contents of NEW, this list would be very handy
 to get a quick overview of the problems plaguing NEW packages.

I can even tell you now all the easy ones: all libraries which are
policy-mandated to change their source name in case of a soname change; the
kernel-source and various kernel-patch/image/whatever packages, or other
packages which need to have the version number embedded in the package name;
and source packages which gain or lose a couple of binary packages in a
reasonable and easy-to-autocheck way.
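These "easy" classes lend themselves to automatic detection. As a purely
hypothetical sketch (the function names and rules here are made up, not dak's
actual code), a checker for the soname-rename case could look like this:

```python
import re

# Hypothetical sketch, not dak's actual code: a library package renamed only
# because its soname changed (e.g. libfoo2 -> libfoo3) is an "easy" NEW case.

SONAME_RE = re.compile(r"^(?P<stem>[a-z][a-z0-9.+-]*?)(?P<abi>\d+(?:\.\d+)*)$")

def is_soname_rename(old_name: str, new_name: str) -> bool:
    """True if the two package names differ only in a trailing ABI number."""
    old, new = SONAME_RE.match(old_name), SONAME_RE.match(new_name)
    return (old is not None and new is not None
            and old.group("stem") == new.group("stem")
            and old.group("abi") != new.group("abi"))

def classify_new(old_binaries: set, new_binaries: set) -> str:
    """Label a NEW upload 'easy' when every added binary looks like a
    soname-style rename of a dropped one; otherwise flag it for review."""
    added = new_binaries - old_binaries
    dropped = old_binaries - new_binaries
    if added and all(any(is_soname_rename(d, a) for d in dropped) for a in added):
        return "easy"
    return "needs-review"
```

A list of NEW packages annotated this way could then be published, so the
hard cases are separated from the trivial ones at a glance.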

 Having a website separating the hard cases from the easy ones is the first 
 step needed to get a discussion about the rest going.

No, the first step is getting a guarantee that the above will be useful and
accepted, or at least considered, by the ftp-masters; otherwise it is just
work that will be thrown away, and I have better things to do than that.

 And discussion in this case doesn't mean posting long rants from the 
 uploaders on d-devel how unfairly the cabal has ignored his package since he 
 uploaded it five years ago to NEW and never cared afterwards.

On various occasions I posted to the ftp-masters about some of my packages in
NEW which were important to get processed for whatever reason. I never got a
single reply to any of those.

But let's hope that the new blood and organisation of the ftp-master team
will help bring this situation down to manageable proportions, as new blood
helped in the NM case and others.

Friendly,

Sven Luther





Re: NEW handling ...

2005-03-20 Thread Sven Luther
On Fri, Mar 18, 2005 at 05:05:07PM +0100, Joerg Jaspert wrote:
 
  And yes, i volunteer to help out NEW handling, if that help is wanted.
 
 Just for the record, not to anyone directly, it just fits here:
 This is not how it works. Offering something randomly and then sitting
 back waiting, later maybe complaining that the offer wasn't accepted.
 
 The way I got into the ftpteam was simply to do the work.
 ssh to merkel, looking at the changes files if I find a reason for the
 package to go out of NEW, compiling a list of stuff, feeding it to one
 of the guys who could run lisa on it.
 Done that a few times with some long lists, got some packages out of
 NEW. Now I do it myself...
 
 So, if you want anything to be done: don't write mails about how it
 could be done, just do it; anything else is just to be treated as the
 stuff our politicians say...

Easy thing. I tried to help the ftp-masters in some cases which were important
and where packages were unduly retained in NEW for a long time (these were the
powerpc kernels, which on two occasions had a one-month waiting time in the
past, and the kernel-latest-powerpc metapackages, which will not be used by
d-i because of the 1.5-month waiting time), and I didn't get a single reply
to this.

And do you seriously think that the ftp-master team would have been expanded
like it has if the issue had not been aired publicly recently? I have some
doubts about it.

Friendly,

Sven Luther





Re: NEW handling ...

2005-03-20 Thread Sven Luther
On Fri, Mar 18, 2005 at 02:34:12PM +0100, Joerg Jaspert wrote:
 On 10232 March 1977, Sven Luther wrote:
 
  Would you be happy if the ftpmasters put everything on auto-veto if there
  was nobody available to monitor the auto-new queue for a few days?
  If the NEW queue handling people can't get the job done, then they should
  recruit more people to help out on this instead of making the whole project
  suffer from their lack of availability.
 
 Read -project and stop whining.

Subscribed there now and reading. Welcome to you on the ftp-master team, and
I hope you will do a great job there.

Why was the announcement not posted on debian-devel-announce or here, though?

Friendly,

Sven Luther





Re: [Proposal] $arch release assistants

2005-03-20 Thread Sven Luther
On Sat, Mar 19, 2005 at 01:01:44AM +0100, Bill Allombert wrote:
 Hello Debian-developer,
 
 I have a modest proposal to reduce the burden of the multiple
 architectures on the Release team. This is based on the following
 assumptions:

Yep, great proposal, I think this would also be a solution. Notice the
following exchange I had with vorlon (Steve Langasek) on IRC, though:

  10:26  svenl vorlon: would you take per-arch-release managers if they
  offered themself up ?
  10:27  svenl release-manager-assistant or something such ?
  10:28  vorlon svenl: I think it'd be better if we first had active porter
  teams working to address the criteria mentioned in the mail
  10:29  svenl vorlon: you know, you antagonized all the porters by that
  email, and the criterias are mostly bogus since not backed by analysis and
  how they solve real problems instead of imagined ones.
  10:31  vorlon svenl: shrug I don't really care if you consider these
  issues imagined or not; *most* of those criteria are grounded in
  real-world concerns, and I don't really consider them negotiable

But let's hope that this, together with my own proposal for handling tier-2
arch testing and possibly releases (which I will post soon), will help bring
things forward.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-20 Thread Sven Luther
On Sat, Mar 19, 2005 at 11:24:14AM +1000, Anthony Towns wrote:
 beyond unstable + snapshotting facility, and why? Debian developers 
 manage to develop on unstable fairly well, eg, why isn't that enough? Is 
 this just a PR issue, in that unstable and snapshot aren't something 
 you can put on a brochure or brag about on slashdot?

I wanted to do a test install on powerpc using sid yesterday (or was it
Friday) using the desktop task, in order to test that the new udev/makedev
combo did indeed fix the RC bug (300166/300170). But for some obscure reason,
tasksel failed to install the desktop task, and didn't even provide a log or
any indication of the reason.

This is exactly why we need testing: unstable can get hit by random breakage,
and using whatever snapshotting of unstable for those minority arches means
the porters will get hit by any number of random problems, not even
arch-specific, and thus, in addition to doing their porter work, they need to
do all the work currently done by the testing scripts and the release team.

Which is why I proposed a build-from-testing method instead, which has the
problem that porters need to upload twice: once to unstable to fix the issue,
and once to arch-unstable if the upload to unstable fails to reach testing
for whatever reason.

So, could you, as the testing-script mastermind :), give us some hints as to
whether a per-arch sub-testing script would be possible?

I mean, we have unstable, where everyone uploads their packages, and then we
have testing, which gets filled from unstable by the testing scripts for the
tier1 arches.

The idea would be to have, for each tier2 arch, a separate testing script,
running on scc or whatever hardware, filling a per-arch testing from unstable,
but with the added limitation that a package needs to be in testing before it
can go into the per-arch testing; individual hinted overrides would be
possible for arch-specific problems, which could also be uploaded through
testing-proposed-updates.

This way, tier1 testing doesn't need to wait for tier2 arches at all, but
tier2 arches can still get the benefit of testing at a lesser cost.
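A rough sketch of that gating rule (hypothetical code; the function and data
shapes here are assumptions, not taken from the real testing scripts):

```python
# Hypothetical sketch of the proposed gating rule, not britney's actual code.
# A version may enter a tier-2 arch's testing only if it has been built on
# that arch AND has either already reached the main (tier-1) testing suite or
# been explicitly hinted in (e.g. an arch-specific fix uploaded through
# testing-proposed-updates).

def can_enter_arch_testing(pkg, version, main_testing, arch_built, hints):
    """main_testing: {package: version} currently in tier-1 testing
    arch_built: set of (package, version) successfully built on this arch
    hints: set of (package, version) manually approved overrides"""
    if (pkg, version) not in arch_built:
        return False                      # never promote an unbuilt version
    if main_testing.get(pkg) == version:
        return True                       # normal path: follow tier-1 testing
    return (pkg, version) in hints        # arch-specific hinted override
```

The point of the extra constraint is that the per-arch script never has to
redo the RC-bug and dependency analysis: it simply trails the decisions the
tier-1 testing scripts have already made.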

I would look at the code and implement this myself, but I don't speak Python,
so I am utterly useless for this kind of thing.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-20 Thread Sven Luther
On Sun, Mar 20, 2005 at 12:00:23PM +1000, Anthony Towns wrote:
 Darren Salt wrote:
 I demand that Anthony Towns may or may not have written...
 Put them behind a firewall on a trusted LAN, use them to develop software
 for arm chips, and then just follow unstable or run non-security-supported
 snapshots. Apart from writing software for embedded arm things, I can't 
 see
 the value
 Linux desktop box comes to mind...
 
 But why would you spend over 1000 pounds on an arm Linux desktop box 
 instead of a few hundred pounds on a random i386 desktop box?

Because you don't want a screaming monster dissipating 100+W on your desk?

 A reasonable answer is because you're developing for arm's for embedded 
 applications; but if so, what's the big deal with using unstable or 
 snapshots, and running your public servers on other boxes?

Because using unstable is not a workable solution. Try to make a daily
unstable install and count how many days it is broken on the tier1 arches,
then see how much worse it can get on the slower tier2 arches.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-20 Thread Sven Luther
On Sat, Mar 19, 2005 at 11:25:22AM +1000, Anthony Towns wrote:
 Henning Makholm wrote:
 The question is whether the *porters* think they have a sufficiently
 good reason to do the work of maintaining a separate testing-esque
 suite. If the porters want to do the work they should be allowed to do
 it.
 
 If they don't need any support from anyone else, they're welcome to do 
 whatever they like. If they want other people to help them, I don't 
 think it's unreasonable to expect an answer to a What's the point? 
 question.

As long as the replies don't get ignored.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-20 Thread Sven Luther
On Sat, Mar 19, 2005 at 01:48:42AM -0800, Steve Langasek wrote:
 Hi Greg,
 
 On Tue, Mar 15, 2005 at 02:10:47PM -0500, Greg Folkert wrote:
   BTW, I am not sure this is really a good way to measure the use of an
   architecture, mainly because users could use a local mirror if they have
   a lot of machines of the same architecture. How about using popcon *in
   addition* to that?
 
   This isn't being used to measure the use of the architecture; it's being
   used to measure the *download frequency* for the architecture, which is
   precisely the criterion that should be used in deciding how to structure
   the mirror network.
 
  Okay, I have to comment here, seeing that I personally have at two
  separate locations, two complete mirrors, that I use nearly everyday.
  They only update when a change in the archive is detected. That means
  *MY* $PRETTY_BIG_NUMBER of usages of my own mirrors in each locale will
  mean nothing. I do my own mirror(s) so as to reduce the load on the
  Debian network. I actually scaled back what I use, now only having 5
  arches I support, SPARC(and UltraSPARC), Alpha, HPPA-RISC, PowerPC and
  x86(Intel and otherwise). I dropped IA64 a while ago and will pick up
  X86_AMD64 when it becomes part of Sid proper.
 
  How would you address the fact that the bulk of my usage is not even seen
  by your network?
 
 Hrm, in what sense is this something that needs to be addressed at all?
 If you use an internal mirror for your heavy internal usage, then surely
 you, as a user, don't need a diverse network of full public mirrors -- you
 just need one, solid mirror to download from, don't you?

Because there is a confusion between the pure mirror issues and the
testing/security/release issues. And I get the impression (maybe wrongly)
that the lack of downloads may somehow influence the decision to drop those
arches from the testing/security/release side too; at least you have not
hinted at the contrary.

I think the mirror issues are fully non-problematic, and everyone agrees with
them; it is the other issues which are problematic.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-20 Thread Sven Luther
On Fri, Mar 18, 2005 at 12:06:15PM -0500, David Nusinow wrote:
 On Fri, Mar 18, 2005 at 05:43:26PM +0100, Adrian Bunk wrote:
  [1] The installer might be a point, but since all sarge architectures
  will have a working installer and I hope there's not another
  installer rewrite planned for etch this shouldn't be a big issue.
 
 This is still an issue. Joey Hess's mails have indicated very clearly that
 it's difficult to get an installer release out even when all arches are
 already supported.

This is a non-issue. The main problem was the kernel situation, which will be
streamlined for etch into a single package, and maybe build issues, which
could be solved by a separate build queue or a higher priority for d-i builds.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-20 Thread Sven Luther
On Sat, Mar 19, 2005 at 04:19:03AM -0800, Steve Langasek wrote:
 On Fri, Mar 18, 2005 at 05:43:26PM +0100, Adrian Bunk wrote:
  On Thu, Mar 17, 2005 at 09:47:42PM -0800, Steve Langasek wrote:
   On Mon, Mar 14, 2005 at 07:59:43PM +, Alastair McKinstry wrote:
 AFAI can tell, anybody can host an archive of packages built from stable
 sources for a scc or unofficial port. And - if I read the conditions on
 becoming a fully supported Debian arch right - then having security support
 for an external pool of this arch is a good indicator that it should be a
 fully supported stable release (amongst other things).
 
The plan as proposed is that the Debian scc ports are purely builds of
unstable. Hence this build out of the last release (e.g. etch) becomes a
subproject of a second-class project of Debian. It effectively has
little credibility.
 
   Well, the release team are not the only Debian developers with credibility,
   surely?  Not everything needs to go through us; if the project has the will
   to do stable releases of these architectures, in spite of the release team
   being unwilling to delay other architectures while waiting for them, then
   it should be very possible to provide full stable releases for these
   architectures.
  ...
 
  Which delays are expected for etch, that are not only imposed by the 
  usage of testing for release purposes? [1]
 
  I do still doubt that testing actually is an improvement compared to the 
  former method of freezing unstable, and even more do I doubt it's worth 
  sacrificing 8 architectures.
 
 If the proposal already gives porters the option to freeze (snapshot)
 unstable to do their own releases, in what sense is this sacrificing
 architectures?  It sounds to me like it's exactly what you've always wanted,
 to eliminate testing from the release process...

Because this means that all the work you do for testing has to be redone on a
per-arch basis without reason, and you know perfectly well how much work that
is.

And this means that the porters have less time to do their real job, which
gives the ports in question an additional push into an early grave.

Friendly,

Sven Luther





A new arch support proposal, hopefully consensual (?)

2005-03-20 Thread Sven Luther
 the
 exact mechanism would work, discussion accepted.

Ok, this would take care of having a testing infrastructure, which, although
it would not hold up the tier1 testing, will still try to stay synced with
it, depending on the archive speed.

The next step is the release process.

 9) tier1 arches decide to release as usual. At this point the tier2 arches
 decide individually what they want to do: whether they are ready for a
 release, or don't want to try for it.

 10) if they want to release, they will either freeze unstable, fork it, or
 whatever, and will work with their copy of testing to stabilize it with
 regard to the released tier1 arch, making uploads to stable-proposed-updates
 (or a separate stable-arch-proposed-updates) for the sole purpose of making
 arch-stable releasable. Uploads should be minimalistic, only what is needed
 to fix arch-specific breakage, and tested on tier1 arches before being
 accepted into stable-proposed-updates. It is the responsibility of the
 porters (or a porter-support-team) to do this testing.

 11) if they succeed in this, those updates can be made part of a future
 stable point release.

Now to the security setup, which only handles stable releases and works
through stable-proposed-updates or something similar, as well as the
stable-security stuff. The vancouver proposal said, on behalf of the security
team, that the problems in doing security updates are threefold; correct me
if I am wrong:

  a) security builds need to complete in a finite amount of time, in order
  not to delay security updates past their end-of-embargo time. Notice that
  the embargo time is often more than a week, though.

  b) arch-specific security issues need someone to investigate, fix, and
  build them.

  c) security work needs an NDA to get advance warning of security issues.

I don't think there are other issues. The vancouver proposal just drops
security for non-tier1 arches, without further possibility.

The counter-proposal would be:

  1) tier1 arches are supported by the security team, and handled as usual.
  The announcement goes out without waiting for tier2 arches.

  2) each port which wants to do security upgrades has to have a security
  representative, who would be under NDA, have the possibility of getting
  access to the security info during the embargo, and be able to do the
  security build well in advance of the announcement date.

  3) at security-announcement time, the security team provides info about all
  the ports which made the build, and notes that builds are upcoming on the
  slower arches still building it.

That should make everyone involved happy.

What is needed to make this happen? We need (or already have):

  1) the tier1-team, comprising the ftp-masters, release managers, and
  security team. Those would work as usual, but with fewer arches, and make
  the decisions on dropping/promoting arches from/to tier1. There may be some
  initial setup cost, but it should not be all that high, and other folk can
  help if needed without being part of the tier1-team.

  2) each arch needs to provide :

- a buildd network able to build the packages.
- one machine handling the per-arch testing script, and other overhead.
- one arch-release-assistant (or team), with power to handle the
  arch-specific testing script and be able to follow the issue.
  (see bill's proposal for details on this).
- one arch-security representative, who will be able to get early access
  to the embargoed security issues, and build the security fix if the
  tier1 security team can't handle it.

  3) the tier1-team could be adjoined by a port-support-team, which would
  handle a certain number of issues common to all arches.

  4) the tier1-team will *NOT* make life difficult for the porters, and will
  keep considering them when making decisions, even though they may have no
  other choice than to go with some port-harming solutions.
  
Well, that is my proposal up to now, a bit different from the one I posted
previously, but I believe many think it a good idea to try to solve this
issue positively, and think like me that Debian without its broad arch
support, or at least the attempt at it, has partly lost its soul.

I would like to have feedback on this from:

  1) the vancouver-proposal authors.
  2) the other members of the upcoming tier1-team, who are not mentioned in
  the vancouver document.
  3) the porters.

Since obviously without input and agreement from all of them, there is no
chance of solving this. And please be positive in your input, as we are all
in this together and want to solve it, don't we?

Friendly,

Sven Luther





Re: orphaning packages

2005-03-20 Thread Sven Luther
On Sat, Mar 19, 2005 at 06:55:37PM +0100, Sergio Rua wrote:
 Hello,
 
 My GPG key was compromised before Xmas and since then I have been unable to
 get a new key. Two of my packages are getting full of bugs which I can't fix
 and close, so I decided to orphan them; if I'm able to get a new key in the
 future, I'll find new packages to maintain.
 
 They are:
 
   openwebmail
   partimage

I would like to at least co-maintain partimage with you. We could move the
packaging to a common svn repo on alioth or something. I don't feel like
taking over sole maintainership of it, though, but a cooperative effort with
you (with me signing and uploading the packages until you can again) would be
a good thing.

Friendly,

Sven Luther





Re: my thoughts on the Vancouver Prospectus

2005-03-20 Thread Sven Luther
On Sun, Mar 20, 2005 at 10:26:44PM +1100, Daniel Stone wrote:
 On Sun, Mar 20, 2005 at 09:07:52AM +0100, Sven Luther wrote:
  On Sun, Mar 20, 2005 at 02:57:23AM +0100, Peter 'p2' De Schrijver wrote:
 * Three bodies (Security, System Administration, Release) are given
   independent veto power over the inclusion of an architecture.
   A) Does the entire team have to exercise this veto for it to be
  effective, or can one member of any team exercise this power
  effectively?

It's expected that each team would exercise that veto as a *team*, by
reaching a consensus internally.
   
   This is obviously unacceptable. Why would a small number of people be
   allowed to veto inclusion of other people's work ?
  
  And a non-elected, non-properly-delegated, self-apointed group of people at
  that.
 
 Are you suggesting replacing the entire release and ftp-master teams?
 If so, please suggest who you would like in that role instead (or if we
 should all vote on it -- because hey, every position in Debian needs to
 be elected).
 
 Are you suggesting that everyone in Debian who has not been elected to
 their position should be elected so?

No, I am suggesting that it is not the responsibility of a small non-mandated
group to decide single-handedly to kill two thirds of our ports; nor are they
mandated to reject the work of a whole subgroup of DDs on a whim.

The ftp-masters are mandated by the DPL to handle the Debian infrastructure,
not to decide which arches Debian should support. And the Vancouver
document somehow showed a willingness on the part of some of them to fork
themselves away from the rest of Debian, holding us hostage through the
key positions of responsibility they occupy in the project. Those positions are
a huge amount of work and I admire them for it, but that is no excuse to
belittle the contribution of the other DDs like that.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-20 Thread Sven Luther
On Sun, Mar 20, 2005 at 09:34:01AM +0100, Matthias Urlichs wrote:
 Hi, Thomas Bushnell BSG wrote:
 
  [EMAIL PROTECTED] (Marco d'Itri) writes:
  
  That on some servers I'd like to mirror both archives, and I'd rather
  not waste a few GB on duplicated files.
  
  So don't duplicate them and use fancier mirroring software.
 
 We can't. AFAIK it's one or two rsync commands, and *that's* it.
 
 Any required fanciness needs to be done on the master server.

Rsync can take fancier arguments though :)
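For instance (a sketch under assumptions: hypothetical local paths, rsync and GNU stat available), rsync's `--link-dest` option hard-links files that already exist unchanged in another local tree, so mirroring both archives need not store shared files twice:

```shell
#!/bin/sh
# Sketch with hypothetical paths: mirror a tree twice without storing
# shared files twice, using rsync's --link-dest hard-linking.
set -e

command -v rsync >/dev/null 2>&1 || { echo "rsync not available"; exit 0; }

work=$(mktemp -d)
mkdir -p "$work/master/pool"
echo "same bytes" > "$work/master/pool/pkg_1.0.deb"

# First mirror: a plain copy of the master tree.
rsync -a "$work/master/" "$work/mirror1/"

# Second mirror: files unchanged relative to mirror1 become hard links
# into mirror1 instead of second copies on disk.
rsync -a --link-dest="$work/mirror1" "$work/master/" "$work/mirror2/"

# Link count 2 = one directory entry per mirror, one copy on disk.
stat -c '%h' "$work/mirror2/pool/pkg_1.0.deb"
```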

Friendly,

Sven Luther





Re: A new arch support proposal, hopefully consensual (?)

2005-03-20 Thread Sven Luther
On Sun, Mar 20, 2005 at 04:59:57PM +0100, Thomas Viehmann wrote:
 Sven Luther wrote:
 
 Problems with many arches:
   - same for the security team.
 Hmm. I only saw Joey's message on the subject, which basically seemed to 
 say as long as it's only one source compiling on all arches, it's OK

Yep, I only read it after having posted this email, and it came as a
surprise to me, since the Vancouver team claimed to have had input from Joey
prior to the meeting, even though he could not attend.

  7) the porter team has the possibility of providing arch-specific overrides
  to solve the issue of a package not passing from unstable into testing due to
  a tier1-specific RC bug or whatever. Should be used sparingly though.
 This seems problematic in this respect.

Well, the idea is that you always upload arch-specific patches to unstable,
but that when these patches get blocked for some random reason (like making
KDE uninstallable on x86 because of some unrelated other fix), you can
either choose to make an arch-specific override, or go for an
arch-testing-proposed-updates upload. You seem to say that the second is
preferable, but it probably depends on the circumstances. In any case, the fix
will be uploaded to unstable.

Notice that if we had all packages in a common revision control system, and
were able to apply one patch, temporarily back up all the others
applied since the last version in testing, and redo a build against the
packages in testing, things would be a whole lot easier.

 I might have missed the previous suggestions or the obvious flaws of the 
 idea, but why not have something along the lines of releasing all 
 'tier2' arches with the packages they have, i.e. agressive per-arch 
 removal for uninstallable/unusable/not-up-to-date packages. Those arches 
 that have something worth releasing at release time (installer, all 
 priority >= important, x% of optional in usual release quality) do that. 

The idea is that we don't want to hold up the release, but we still want to
allow for a future release at a later point, in a stable point release,
especially now that we are told that security is not an issue.

 This way, the security support of the additional arches would stay 
 largely the same. One could have the present testing rules up to some 
 point and switch to if arch-specific RC bugs/testing delays pop up, 
 stuff get removed for release.

Not sure this is a good idea. The main point is for the arch-specific
fix to get in in a timely fashion when it is blocked by unrelated
tier1 breakage, not to remove the package and thus lose the fix. Unless you
are saying that in this case we should remove the tier1 packages from testing
instead :)

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-20 Thread Sven Luther
On Sun, Mar 20, 2005 at 10:53:57PM +1100, Hamish Moffatt wrote:
 On Sun, Mar 20, 2005 at 09:56:05AM +0100, Sven Luther wrote:
  On Sun, Mar 20, 2005 at 12:00:23PM +1000, Anthony Towns wrote:
   But why would you spend over 1000 pounds on an arm Linux desktop box 
   instead of a few hundred pounds on a random i386 desktop box?
  Because you don't want a 100+W dissipating screaming monster on your desk ?
 
 You can get low power x86 systems that have much better performance (> 1
 GHz).

They would be too slow for autobuilder work though.

   A reasonable answer is because you're developing for arm's for embedded 
   applications; but if so, what's the big deal with using unstable or 
   snapshots, and running your public servers on other boxes?
  
  Because using unstable is not a workable solution. Try to make a daily
  unstable install, and count how many days it is broken on the tier1 arches,
  and see how worse it can become on tier2 slower arches.
 
 Most work for embedded systems would be cross-compiled from faster
 systems anyway.

Yeah, and most people working this way use Windows as their development
platform anyway, I know.

Friendly,

Sven Luther





Re: A new arch support proposal, hopefully consensual (?)

2005-03-20 Thread Sven Luther
On Sun, Mar 20, 2005 at 06:24:23PM +0100, Thomas Viehmann wrote:
 Sven Luther wrote:
 The idea is that we don't want to hold up release, but we still want to 
 allow
 for a future release at a later point, in a stable point release. 
 Especially
 now that we are told that security is not an issue.
 
 This way, the security support of the additional arches would stay 
 largely the same. One could have the present testing rules up to some 
 point and switch to if arch-specific RC bugs/testing delays pop up, 
 stuff get removed for release.
 
 Not sure if this is a good idea. The main point will be for the 
 arch-specific
 fix to get in in a timely fashion, and it being blocked by unrelated
 tier1-breakage, not to remove the package and thus remove the fix. If you 
 are
 saying that we should in this case remove the tier1 packages from testing
 though :)
 
 Well, you'll at most get classic security support for those sources 
 that match the regular ones, and I doubt that the policy for point 
 releases will - or should - be weakened to allow arch-fixes. My 

Why not? The aim is to provide stable releases for those arches, not to drop
them and leave them to fend for themselves, so some arrangement needs to be
made.

This is, I think, the price we have to pay for letting these arches get out of
sync with testing.

 impression was that a split into supported / less supported (yeah, 
 reminiscent of a popular derivative) of the ports would reduce the total 
 amount of work while not overburdening the general process.

Yes, but without dropping support.

Friendly,

Sven Luther





Re: my thoughts on the Vancouver Prospectus

2005-03-20 Thread Sven Luther
On Mon, Mar 21, 2005 at 01:16:42AM +1000, Anthony Towns wrote:
 Sven Luther wrote:
 The ftp-masters are mandated by the DPL to handle the debian 
 infrastructure,
 not to decide what arches debian should support or not.
 
 This is not the case; ftpmaster's role has historically included deciding at 
 what point architectures can be included in the archive (and in sh's case, at 
 what point they should be removed), and the release manager's role has 
 included deciding at what point an architecture is suitable for release.

Ok, point taken.

 For an earlyish example of an RM (in this case me) setting explicit 
 requirements for considering an architecture for release, see:
 
 There are four ports, any of which may want to try for a woody release:
 hurd-i386, mips, hppa and ia64. If they do, they need to ensure that
 their port has stabilised and is ready for mainstream use, that the
 relevant required, important and standard packages have all been ported,
 that they have a functioning autobuilder (or two) that can keep up with
 unstable (and is keeping up with unstable) and that it's built a fair
 chunk of optional and extra, and they need to ensure that they can get
 boot-floppies working in the above time frame.
 
  -- http://lists.debian.org/debian-devel-announce/2001/05/msg3.html
 
 I'm not aware of Martin undelegating those classes of decisions.

Well, I believe that before Martin's time as DPL, none of them were official
delegates. I might be wrong, but I think one of Martin's first acts as
DPL was to discuss this with them and make the delegation official.

 In the broader sense, of course, no the ftpmasters don't decide what 
 architectures Debian will support -- just those that'll be supported in 
 the archive proper. AMD64 is an example of an architecture that falls in 
 between those two categories.

They still have an unbalanced power of decision, which was applied unilaterally
in the Vancouver proposal.

 [...] and they hold us hostage [...]
 Friendly,
 
 It seems odd to pretend to be friendly towards people you consider 
 hostage takers. Or to call people you claim to be friendly towards 
 hostage takers.

Well, I believe, like Andreas, that Debian will only succeed if we all have a
friendly attitude toward each other, and are all in this because we have fun
in it. Now, you can criticize friends, I believe, especially if you think they
are behaving wrongly; I have been criticized that way, and have criticized
others, but we are all still part of the same community, and hopefully will
have fun drinking beers together in Helsinki or whatever.

Now, the Vancouver proposal has a real problem in this respect, since I believe
that it tries to take the project in a certain direction where not everyone
wants to go, and brings with it a real risk of an actual fork of the
project over this.

Also, I wonder how some of our sponsors feel about this; how would HP feel
if we were to drop hppa and ia64, would they still like us? Or Sun, who
donated (used to donate?) hardware for our infrastructure.

Friendly,

Sven Luther





Re: NEW handling ...

2005-03-20 Thread Sven Luther
On Sun, Mar 20, 2005 at 08:20:40PM +0100, David Schmitt wrote:
 On Sunday 20 March 2005 12:08, Sven Luther wrote:
  On Fri, Mar 18, 2005 at 02:40:34PM +0100, David Schmitt wrote:
   On Friday 18 March 2005 13:26, Sven Luther wrote:
And yes, i volunteer to help out NEW handling, if that help is wanted.
  
   Vapourware. I believe that for most packages it is quite easy to see why
   they are not allowed into unstable. Compile this list+reasons so that
   everyone who is interested in these packages can quickly see where the
   problems are. If there is any interest in contents of NEW this list would
   be very handy to get a quick overview of the problems plaguing NEW
   packages.
 
  I can even tell you now all the easy ones: all libraries which are policy
  mandated to change their source name in case of a soname change. The
  kernel-source and various kernel-patch/image/whatever packages, or other
  packages which need to have the version number embedded in the package
  name. Source packages which gain or lose a couple of binary packages in a
  reasonable and easy-to-autocheck way.
 
 The way you say that leads me to the conclusion that you are only guessing.
 
 Do you really want to know how many libraries in NEW currently are waiting 
 for 
 a binary with a new soname?

Ok, I take that back, I just learned that the library case is somewhat more
complicated, since sometimes they change the soname in the binary packages but
not in the library source package.

This most certainly doesn't apply to kernel-source packages though, where an
ABI-changing security upload needs at least one NEW processing per arch, and
often only the first upload gets processed, while the arches that come later
are left to smolder in the NEW queue forever. Well, the new blood in the
ftp-master team will probably make this an issue of the past.

 One:
 
 liboil0.3  0.3.0-1
  source i386 unstable
  2 months  David Schleef  #284486
 
 liboil  0.3.1-1
  source i386 unstable
  2 days  David Schleef

Ok, so what?

 Let's take a look at kernel images stuck in NEW:
 
 $ egrep '^<td>[^<]*</td>' new.html | cut -d '>' -f 2 | cut -d '<' -f 1 | grep kernel
 kernel-patch-2.4-blooper
 kernel-patch-2.4-pom
 kernel-latest-2.6-hppa
 kernel-patch-suspend2
 kernel-image-2.6.8-ia64
 kernel-image-2.6.10-sparc

And at least this sparc kernel should be processed ASAP, since it is the last
2.6.10 kernel we are waiting for. The ia64 kernel is probably also overdue,
since 2.6.8 is a release candidate, and as far as I know, the
kernel-latest-2.6-hppa is just making official the dropping of the 2.4
hppa kernels.

The remaining ones are just random kernel patches, not part of the actual
kernel packages.

Notice that the kernel-patch-suspend2 patch is part of the Ubuntu hoary
kernel.
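As an illustration (the real new.html layout is an assumption here), the egrep/cut pipeline quoted earlier can be exercised against a hand-made sample of the NEW queue page:

```shell
#!/bin/sh
# Sketch: run the quoted td-extraction pipeline on a tiny, hand-made
# sample of the NEW queue page (the real new.html layout is an assumption).
set -e
work=$(mktemp -d)

cat > "$work/new.html" <<'EOF'
<tr>
<td>kernel-image-2.6.10-sparc</td>
<td>liboil</td>
<td>kernel-patch-suspend2</td>
</tr>
EOF

# Keep single-cell lines, strip the <td> markup, filter for kernel packages.
# Prints the two kernel-* package names, one per line.
grep -E '^<td>[^<]*</td>' "$work/new.html" \
  | cut -d '>' -f 2 | cut -d '<' -f 1 | grep kernel
```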

 Using some awk magic I get this table:
 
 kernel-patch-2.4-blooper 1.1
  source all unstable
  11 months Matthew Grant 
 kernel-patch-2.4-pom 20031219-1
  source all unstable
  11 months Matthew Grant 
 
 We already talked about those.
 
 kernel-latest-2.6-hppa 2.6.8-1
  source hppa unstable
  1 month Kyle McMartin 

See, overkill: the kernel-latest packages are meant to be used by the d-i
base-installer, which is now frozen and thus cannot use them. Which means
we are left with no hppa meta-packages for the lifetime of sarge as the stable
release. The same happened for powerpc.

 debian-kernel managed kernel-image tracker packages seem to be called 
 kernel-image-$ver-$subarch (e.g. kernel-image-2.6-686). Debian should strive 
 to unify this as much as possible. REJECT. No wait, REJECTing this out of 
 hand would lead to a pissed maintainer filling FMs mailbox. FMs are not 
 debian-mentor, just let it rot, perhaps someone can clue him in...

Rejecting those would result in a pissed kernel maintainer team, I would say.

 kernel-patch-suspend2 2.1.8.1-1
 2.1.8-3
  source all experimental
  1 week martin f. krafft #292479
 
 I have already grabbed that one from the repository on martin's page since I am
 desperately wanting to hibernate my laptop. Well, obviously not desperately
 enough, because I haven't yet fixed the patch for 2.6.10-current which would
 be needed to get any semblance of ACPI working on this one.
 
 kernel-image-2.6.8-ia64 2.6.8-13
  source ia64 unstable
  3 days Debian Kernel Team 
 kernel-image-2.6.10-sparc 2.6.10-6
  source sparc unstable
  3 days Debian Kernel Team
 
 That leaves two packages which are only three days old. There are 

Sure, these should have been auto-accepted, and this 3-day delay is
unacceptable. I guess the ia64 kernel even contains security fixes, and is a
sarge release candidate, and the 2.6.10 one is a sarge fall-back release
candidate.

   Having a website separating the hard cases from the easy ones is the
   first step needed to get a discussion about the rest going.
 
  No, the first step is getting a guarantee that the above will be useful and
  accepted, or at least considered by the ftp-masters; otherwise it is just
  work that will be thrown away.

Re: NEW handling ...

2005-03-20 Thread Sven Luther
On Sun, Mar 20, 2005 at 02:35:56PM -0800, Steve Langasek wrote:
 On Sun, Mar 20, 2005 at 12:11:07PM +0100, Sven Luther wrote:
  On Fri, Mar 18, 2005 at 05:05:07PM +0100, Joerg Jaspert wrote:
 
And yes, i volunteer to help out NEW handling, if that help is wanted.
 
   Just for the record, not to anyone directly, it just fits here:
   This is not how it works. Offering something randomly and then sitting
   back waiting, later maybe complaining the offer wasn't accepted.
 
   The way I got into the ftpteam was simple to do the work.
   ssh to merkel, looking at the changes files if I find a reason for the
   package to go out of new, compiling a list of stuff, feeding it to one
   of the guys who could run lisa on it.
   Done that a few times with some long lists, got some packages out of
   NEW. Now I do it myself...
 
   So, if you want anything to be done: don't write mails about how it
   could be done, just do it; anything else is just to be treated as the
   stuff our politicians say...
 
  Easy thing. I tried to help the ftp-masters on some cases which were important
  and where packages got unduly retained in NEW for a long time (and these were
  the powerpc kernels, which on two occasions had a one-month waiting time in
  the past, and the kernel-latest-powerpc metapackages, which will not be used
  by d-i because of the 1.5-month waiting time)
 
   http://lists.debian.org/debian-devel/2005/03/msg00410.html
 
 Lay off the fucking FUD.

Ask Colin. kernel-latest was not used in d-i rc3 because it was 1.5 months in
NEW.

  And do you seriously think that the ftp-master team would have been expanded
  like it has if the issue had not been aired publicly recently? I have some
  doubts about it.
 
 That's because you're a twit, and the very reason ftp-masters are afraid

Thanks all the same.

 that taking positive action in the current mailing list climate is seen as
 encouraging that hostile climate.  Thanks for that.
 
   http://lists.debian.org/debian-project/2005/02/msg00213.html

And what do you say of aj denying there is a NEW problem in the debian-vote
threads?

Friendly,

Sven Luther





Accepted kbd-chooser 1.11 (powerpc source)

2005-03-20 Thread Sven Luther
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.7
Date: Sun, 20 Mar 2005 18:03:00 +0100
Source: kbd-chooser
Binary: kbd-chooser
Architecture: source powerpc
Version: 1.11
Distribution: unstable
Urgency: high
Maintainer: Debian Install System Team <debian-boot@lists.debian.org>
Changed-By: Sven Luther [EMAIL PROTECTED]
Description: 
 kbd-chooser - Detect a keyboard and select layout (udeb)
Changes: 
 kbd-chooser (1.11) unstable; urgency=high
 .
   * Sven Luther
  - Modified grep in kbd-chooser.c, so we can now detect if a console=tty0
is present after a console=tty[sS] in /proc/cmdline, so kbd-chooser
doesn't wrongly think we are on a serial console.
   * Frans Pop
 - Delete redundant call to check_if_serial_console in main.
Files: 
 07a478edea10356d94dc725494de800b 785 debian-installer optional 
kbd-chooser_1.11.dsc
 c7402fac1c79d0d43c9bc979f1f52410 68730 debian-installer optional 
kbd-chooser_1.11.tar.gz
 ce7d38757cfb37c5d6d9b19cbfe7f7cf 43244 debian-installer optional 
kbd-chooser_1.11_powerpc.udeb
package-type: udeb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.0 (GNU/Linux)

iD8DBQFCPa242WTeT3CRQaQRAmQsAJ98lD8WbesXsvkg7lDswy8Z/3O5/ACffogz
d5Nhff1xeGRsDtA3XpMQ1xI=
=thAK
-----END PGP SIGNATURE-----


Accepted:
kbd-chooser_1.11.dsc
  to pool/main/k/kbd-chooser/kbd-chooser_1.11.dsc
kbd-chooser_1.11.tar.gz
  to pool/main/k/kbd-chooser/kbd-chooser_1.11.tar.gz
kbd-chooser_1.11_powerpc.udeb
  to pool/main/k/kbd-chooser/kbd-chooser_1.11_powerpc.udeb





Accepted kernel-patch-powerpc-2.6.8 2.6.8-12 (powerpc source)

2005-03-20 Thread Sven Luther
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.7
Date: Sun, 20 Mar 2005 08:03:08 +0100
Source: kernel-patch-powerpc-2.6.8
Binary: kernel-build-2.6.8-power3 kernel-build-2.6.8-power4-smp 
kernel-build-2.6.8-powerpc kernel-image-2.6.8-power4-smp 
kernel-build-2.6.8-powerpc-smp kernel-image-2.6.8-power3 
kernel-build-2.6.8-power3-smp kernel-image-2.6.8-powerpc-smp 
kernel-image-2.6.8-power3-smp kernel-image-2.6.8-power4 
kernel-image-2.6.8-powerpc kernel-headers-2.6.8 kernel-build-2.6.8-power4
Architecture: source powerpc
Version: 2.6.8-12
Distribution: unstable
Urgency: high
Maintainer: Debian Kernel Team <debian-kernel@lists.debian.org>
Changed-By: Sven Luther [EMAIL PROTECTED]
Description: 
 kernel-build-2.6.8-power3 - build infrastructure for kernel version 
2.6.8-power3
 kernel-build-2.6.8-power3-smp - build infrastructure for kernel version 
2.6.8-power3-smp
 kernel-build-2.6.8-power4 - build infrastructure for kernel version 
2.6.8-power4
 kernel-build-2.6.8-power4-smp - build infrastructure for kernel version 
2.6.8-power4-smp
 kernel-build-2.6.8-powerpc - build infrastructure for kernel version 
2.6.8-powerpc
 kernel-build-2.6.8-powerpc-smp - build infrastructure for kernel version 
2.6.8-powerpc-smp
 kernel-headers-2.6.8 - header files for the Linux kernel version 2.6.8
 kernel-image-2.6.8-power3 - Linux kernel image for 2.6.8-power3
 kernel-image-2.6.8-power3-smp - Linux kernel image for 2.6.8-power3-smp
 kernel-image-2.6.8-power4 - Linux kernel image for 2.6.8-power4
 kernel-image-2.6.8-power4-smp - Linux kernel image for 2.6.8-power4-smp
 kernel-image-2.6.8-powerpc - Linux kernel image for 2.6.8-powerpc
 kernel-image-2.6.8-powerpc-smp - Linux kernel image for 2.6.8-powerpc-smp
Changes: 
 kernel-patch-powerpc-2.6.8 (2.6.8-12) unstable; urgency=high
 .
   * Rebuilding with added kernel-tree magic, as requested by the release
 managers to handle the abi breaking kernel-source-2.6.8-14 upload.
   * Also create the kernel-tree-version file in the documentation which
 mentions against which kernel-tree we did build.
Files: 
 ee07030ae0d538e2f910ebfde64e1d80 1008 devel optional 
kernel-patch-powerpc-2.6.8_2.6.8-12.dsc
 8676dc5643577ae191a8dca586f5d3cd 25138 devel optional 
kernel-patch-powerpc-2.6.8_2.6.8-12.tar.gz
 f5d3b95cdb8e5a54b7f3df99539f4dee 5143838 devel optional 
kernel-headers-2.6.8_2.6.8-12_powerpc.deb
 de53145d8a7999da43154f42c736b874 13533310 base optional 
kernel-image-2.6.8-power3_2.6.8-12_powerpc.deb
 0a28d4b0240e632141d5294ccb45ef0d 405012 devel optional 
kernel-build-2.6.8-power3_2.6.8-12_powerpc.deb
 98dff61ef3fb925386faa8b83a84de5f 13885470 base optional 
kernel-image-2.6.8-power3-smp_2.6.8-12_powerpc.deb
 fc649a6e79705bde5ac38680b09f3b54 404930 devel optional 
kernel-build-2.6.8-power3-smp_2.6.8-12_powerpc.deb
 666ab2a48420239df862f22d83f2dbdf 13522276 base optional 
kernel-image-2.6.8-power4_2.6.8-12_powerpc.deb
 d378eea715a65539a82c27576453ced8 404922 devel optional 
kernel-build-2.6.8-power4_2.6.8-12_powerpc.deb
 a4829f9c21628b1f7b0b1e078793b053 13874490 base optional 
kernel-image-2.6.8-power4-smp_2.6.8-12_powerpc.deb
 260a99517fa0953458f7981aa24f54ce 404898 devel optional 
kernel-build-2.6.8-power4-smp_2.6.8-12_powerpc.deb
 1a8d2748af67f6f2426bec9b4c254bb7 13550724 base optional 
kernel-image-2.6.8-powerpc_2.6.8-12_powerpc.deb
 dfa710c0aac29301485bceb2a686d1f9 405200 devel optional 
kernel-build-2.6.8-powerpc_2.6.8-12_powerpc.deb
 3c90d902acd8314f381924f50ce8bfb1 13805940 base optional 
kernel-image-2.6.8-powerpc-smp_2.6.8-12_powerpc.deb
 97f1a003e8f3ed1b3bad2f7b2dcb90f0 405010 devel optional 
kernel-build-2.6.8-powerpc-smp_2.6.8-12_powerpc.deb

-----BEGIN PGP SIGNATURE-----
Version: GnuPG v1.4.0 (GNU/Linux)

iD8DBQFCPZ6/2WTeT3CRQaQRAleSAKCeIw1eXMagMj62kiU00TNg5EhKTQCgg3B6
fENgpSZFusE2U6QfKcHvD/Q=
=q0ML
-----END PGP SIGNATURE-----


Accepted:
kernel-build-2.6.8-power3-smp_2.6.8-12_powerpc.deb
  to 
pool/main/k/kernel-patch-powerpc-2.6.8/kernel-build-2.6.8-power3-smp_2.6.8-12_powerpc.deb
kernel-build-2.6.8-power3_2.6.8-12_powerpc.deb
  to 
pool/main/k/kernel-patch-powerpc-2.6.8/kernel-build-2.6.8-power3_2.6.8-12_powerpc.deb
kernel-build-2.6.8-power4-smp_2.6.8-12_powerpc.deb
  to 
pool/main/k/kernel-patch-powerpc-2.6.8/kernel-build-2.6.8-power4-smp_2.6.8-12_powerpc.deb
kernel-build-2.6.8-power4_2.6.8-12_powerpc.deb
  to 
pool/main/k/kernel-patch-powerpc-2.6.8/kernel-build-2.6.8-power4_2.6.8-12_powerpc.deb
kernel-build-2.6.8-powerpc-smp_2.6.8-12_powerpc.deb
  to 
pool/main/k/kernel-patch-powerpc-2.6.8/kernel-build-2.6.8-powerpc-smp_2.6.8-12_powerpc.deb
kernel-build-2.6.8-powerpc_2.6.8-12_powerpc.deb
  to 
pool/main/k/kernel-patch-powerpc-2.6.8/kernel-build-2.6.8-powerpc_2.6.8-12_powerpc.deb
kernel-headers-2.6.8_2.6.8-12_powerpc.deb
  to 
pool/main/k/kernel-patch-powerpc-2.6.8/kernel-headers-2.6.8_2.6.8-12_powerpc.deb
kernel-image-2.6.8-power3-smp_2.6.8-12_powerpc.deb
  to 
pool/main/k/kernel-patch-powerpc-2.6.8/kernel-image-2.6.8-power3-smp_2.6.8-12_powerpc.deb

Accepted linux-kernel-di-powerpc-2.6 0.79 (powerpc source)

2005-03-20 Thread Sven Luther
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1

Format: 1.7
Date: Sun, 20 Mar 2005 18:10:53 +0100
Source: linux-kernel-di-powerpc-2.6
Binary: fat-modules-2.6.8-power4-di ppp-modules-2.6.8-powerpc-di 
scsi-modules-2.6.8-power3-di firewire-core-modules-2.6.8-powerpc-di 
hfs-modules-2.6.8-power4-di scsi-extra-modules-2.6.8-power3-di 
serial-modules-2.6.8-powerpc-di jfs-modules-2.6.8-power4-di 
ext2-modules-2.6.8-powerpc-di firmware-modules-2.6.8-power3-di 
affs-modules-2.6.8-powerpc-di fat-modules-2.6.8-power3-di 
firewire-core-modules-2.6.8-power4-di scsi-core-modules-2.6.8-powerpc-di 
nic-modules-2.6.8-power3-di firmware-modules-2.6.8-powerpc-di 
jfs-modules-2.6.8-powerpc-di loop-modules-2.6.8-power4-di 
ide-modules-2.6.8-power3-di affs-modules-2.6.8-power4-di 
nic-modules-2.6.8-power4-di ide-modules-2.6.8-power4-di 
ide-modules-2.6.8-powerpc-di reiserfs-modules-2.6.8-power4-di 
nic-shared-modules-2.6.8-power4-di floppy-modules-2.6.8-power3-di 
ext3-modules-2.6.8-power4-di scsi-modules-2.6.8-powerpc-di 
nic-extra-modules-2.6.8-power4-di scsi-common-modules-2.6.8-powerpc-di 
socket-modules-2.6.8-powerpc-di nic-pcmcia-modules-2.6.8-power4-di 
nic-shared-modules-2.6.8-power3-di reiserfs-modules-2.6.8-powerpc-di 
md-modules-2.6.8-power4-di scsi-modules-2.6.8-power4-di 
sata-modules-2.6.8-powerpc-di ext2-modules-2.6.8-power4-di 
loop-modules-2.6.8-powerpc-di ppp-modules-2.6.8-power4-di 
xfs-modules-2.6.8-power3-di kernel-image-2.6.8-power4-di 
loop-modules-2.6.8-power3-di nic-pcmcia-modules-2.6.8-powerpc-di 
ufs-modules-2.6.8-power4-di sata-modules-2.6.8-power4-di 
hfs-modules-2.6.8-power3-di usb-modules-2.6.8-power4-di 
serial-modules-2.6.8-power4-di serial-modules-2.6.8-power3-di 
pcmcia-storage-modules-2.6.8-powerpc-di fb-modules-2.6.8-powerpc-di 
input-modules-2.6.8-power4-di ext3-modules-2.6.8-powerpc-di 
usb-storage-modules-2.6.8-power4-di ipv6-modules-2.6.8-powerpc-di 
ipv6-modules-2.6.8-power4-di pcmcia-storage-modules-2.6.8-power3-di 
kernel-image-2.6.8-powerpc-di md-modules-2.6.8-powerpc-di 
nic-modules-2.6.8-powerpc-di fat-modules-2.6.8-powerpc-di 
nic-extra-modules-2.6.8-powerpc-di pcmcia-modules-2.6.8-powerpc-di 
ufs-modules-2.6.8-power3-di scsi-common-modules-2.6.8-power3-di 
pcmcia-modules-2.6.8-power4-di scsi-common-modules-2.6.8-power4-di 
nic-shared-modules-2.6.8-powerpc-di ext2-modules-2.6.8-power3-di 
jfs-modules-2.6.8-power3-di fs-common-modules-2.6.8-power3-di 
kernel-image-2.6.8-power3-di ufs-modules-2.6.8-powerpc-di 
ipv6-modules-2.6.8-power3-di nic-pcmcia-modules-2.6.8-power3-di 
xfs-modules-2.6.8-power4-di irda-modules-2.6.8-power3-di 
affs-modules-2.6.8-power3-di pcmcia-modules-2.6.8-power3-di 
fs-common-modules-2.6.8-powerpc-di firewire-core-modules-2.6.8-power3-di 
floppy-modules-2.6.8-power4-di usb-storage-modules-2.6.8-power3-di 
usb-storage-modules-2.6.8-powerpc-di irda-modules-2.6.8-powerpc-di 
cdrom-core-modules-2.6.8-power4-di scsi-extra-modules-2.6.8-powerpc-di 
scsi-extra-modules-2.6.8-power4-di fb-modules-2.6.8-power3-di 
socket-modules-2.6.8-power4-di ext3-modules-2.6.8-power3-di 
hfs-modules-2.6.8-powerpc-di firmware-modules-2.6.8-power4-di 
sata-modules-2.6.8-power3-di 
nic-extra-modules-2.6.8-power3-di cdrom-core-modules-2.6.8-power3-di 
pcmcia-storage-modules-2.6.8-power4-di md-modules-2.6.8-power3-di 
scsi-core-modules-2.6.8-power3-di usb-modules-2.6.8-powerpc-di 
usb-modules-2.6.8-power3-di input-modules-2.6.8-powerpc-di 
xfs-modules-2.6.8-powerpc-di floppy-modules-2.6.8-powerpc-di 
cdrom-core-modules-2.6.8-powerpc-di fs-common-modules-2.6.8-power4-di 
scsi-core-modules-2.6.8-power4-di input-modules-2.6.8-power3-di 
fb-modules-2.6.8-power4-di reiserfs-modules-2.6.8-power3-di 
irda-modules-2.6.8-power4-di ppp-modules-2.6.8-power3-di 
socket-modules-2.6.8-power3-di
Architecture: source powerpc
Version: 0.79
Distribution: unstable
Urgency: high
Maintainer: Debian Install System Team <debian-boot@lists.debian.org>
Changed-By: Sven Luther [EMAIL PROTECTED]
Description: 
 affs-modules-2.6.8-power3-di - Amiga filesystem support (udeb)
 affs-modules-2.6.8-power4-di - Amiga filesystem support (udeb)
 affs-modules-2.6.8-powerpc-di - Amiga filesystem support (udeb)
 cdrom-core-modules-2.6.8-power3-di - CDROM support (udeb)
 cdrom-core-modules-2.6.8-power4-di - CDROM support (udeb)
 cdrom-core-modules-2.6.8-powerpc-di - CDROM support (udeb)
 ext2-modules-2.6.8-power3-di - EXT2 filesystem support (udeb)
 ext2-modules-2.6.8-power4-di - EXT2 filesystem support (udeb)
 ext2-modules-2.6.8-powerpc-di - EXT2 filesystem support (udeb)
 ext3-modules-2.6.8-power3-di - EXT3 filesystem support (udeb)
 ext3-modules-2.6.8-power4-di - EXT3 filesystem support (udeb)
 ext3-modules-2.6.8-powerpc-di - EXT3 filesystem support (udeb)
 fat-modules-2.6.8-power3-di - FAT filesystem support (udeb)
 fat-modules-2.6.8-power4-di - FAT filesystem support (udeb)
 fat-modules-2.6.8-powerpc-di - FAT filesystem support (udeb)
 fb-modules-2.6.8-power3-di - Frame buffer support (udeb)
 fb-modules-2.6.8-power4-di

Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-18 Thread Sven Luther
On Thu, Mar 17, 2005 at 08:00:45PM +0100, Thiemo Seufer wrote:
  Both of these are plausible; the difference is whether you autobuild
  from unstable or testing.  I would prefer the former, which means your
  former case.
 
 Autobuilding from testing won't work well AFAICS, as it introduces
 another delay until rc-arch bugs are found. Building packages with
 generic RC bugs and ignore them for subtesting seems to be the lesser
 evil.

I disagree here, since building from testing means not duplicating all the work
done in tier-1 testing. My proposal called for a per-arch override, uploading
directly to arch-unstable, in case of stalls and such.

Also, building from testing makes synchronizing with tier-1 testing for an
arch stable release easier.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-18 Thread Sven Luther
On Fri, Mar 18, 2005 at 12:06:08PM +1000, Anthony Towns wrote:
 Daniel Jacobowitz wrote:
 I would really like to see some real use cases for architectures that 
 want this; I'd like to spend my time on things that're actually useful, 
 not random whims people have on lists -- and at the moment, I'm not in a 
 good position to tell the difference for most of the non-release 
 architectures.
 Sure.  The idea is still half-baked.  If it has merit, someone needs to
 finish cooking it...
 
 So, I'd just like to re-emphasise this, because I still haven't seen 
 anything that counts as useful. I'm thinking something like We use s390 
 to host 6231 scientific users on Debian in a manner compatible to the 
 workstations they use; the software we use is ; we rely on having 
 security support from Debian because we need to be on the interweb 2; 
  At the moment, the only use cases I'm confident exist are:
 
   m68k, mips, mipsel, hppa: I've got one in the basement, and I like 
   to brag that I run Debian on it; also I occassionally get some work out 
 of 
 it, but it'd be trivial to replace with i386.
 
   sparc, alpha: We've bought some of these a while ago, they're useful 
 running Debian, we'd really rather not have to stress with switching to 
 i386, but whatever.
 
   arm: We're developing some embedded boxes, that won't run Debian 
 proper, but it's really convenient to have Debian there to bootstrap 
 them trivially.
 
   s390: Hey, it's got spare cycles, why not?
 
 None of those are enough to justify effort maintaining a separate 
 testing-esque suite for them; but surely someone has some better 
 examples they can post...

I think the main answer is for developers using said arches: they need a solid
base to work on, and unstable is not it, with large chunks of it being
uninstallable for long periods of time (and this happens even on powerpc,
which is a faster arch). So killing testing gives the developer (or whoever
uses these arches) no real way of doing a clean install or maintaining a working
setup, and removing testing becomes a self-fulfilling prophecy that these
arches will die soon.

 The questions that need to be answered by the use case are what useful 
 things are being done with the arch and why not just replace this with 
 i386/amd64 hardware when support for sarge is dropped, which won't be 
 for at least 12-18 months (minimum planned etch release cycle) plus 12 
 months (expected support for sarge as oldstable). Knowing why you're 
 using Debian and not another distribution or OS would be interesting too.

But that would not be Debian anymore; at that point I wonder if a fork would be
the only solution, and whether the x86-centric distribution that would emerge
from it would be fit to keep the Debian name.

What percentage of our developers come from alternative arches, and how much
work and how many good-quality maintainers will we lose if we kill these
arches? And can we afford that?

Friendly,

Sven Luther





Re: NEW handling ...

2005-03-18 Thread Sven Luther
On Thu, Mar 17, 2005 at 11:37:50PM +0100, Matthias Urlichs wrote:
 Hi, Matthew Palmer wrote:
 
  I wonder if we could change Debian's attitude to NEW rejection like has
  happened with NMUs -- that having your package rejected isn't the end of the
  world, it's just something that happens.  So ftpmasters could reject with
  less fear of being taken to the cleaners by random pissed-off maintainers on
  d-devel.
 
 My off-the-wall guess is that most DDs have problems with getting rejected
 because NEW takes a comparatively long time. Thus, after your package is
 stuck in NEW for two weeks, you get told that it's rejected because of a
 minor problem you could fix with a reupload just as easily -- instead,
 it's going to be stuck in NEW for *another* two weeks.

Well, if it were just two weeks, that would be OK; the problem is that the
package will sit in NEW for an undetermined amount of time, possibly even
forever.

 That's (some) developer's view of the situation.
 
 The ftpmaster's view seems to be (I imagine not without some
 justification) that, unless the package is rejected, the average DD will
 never bother to fix it. :-/

So the package just sits in NEW for a couple of months, or years, or whatever.

And yes, I volunteer to help out with NEW handling, if that help is wanted.

Friendly,

Sven Luther





Re: NEW handling ...

2005-03-18 Thread Sven Luther
On Fri, Mar 18, 2005 at 08:31:51AM +1100, Matthew Palmer wrote:
 Bollocks.  It's the clever people who usually end up overworked, because
 they can do more critical things with their time.

Apparently you don't know what smileys are for.

 Perhaps you could demonstrate your cleverness by providing ftpmasters with a
 script to automatically check that the debian/copyright file on a package is
 reasonably correct.  Shouldn't be too hard for a clever fellow such as
 yourself.

Like doing a diff of it against the GPL, and rejecting it if it doesn't match?
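Taken literally, the quip above could be sketched as a similarity check of a
debian/copyright file against a known licence text. This is purely
illustrative: the helper names and the 0.9 threshold are invented here, and a
real check would need far more than a textual diff.

```python
# Toy sketch of "diff the copyright file against the GPL".
# The 0.9 threshold is an arbitrary assumption for illustration.
import difflib


def licence_similarity(copyright_text, licence_text):
    """Return a 0..1 similarity ratio between two texts, word-wise."""
    return difflib.SequenceMatcher(
        None, copyright_text.split(), licence_text.split()
    ).ratio()


def looks_like(copyright_text, licence_text, threshold=0.9):
    """True if the copyright file closely matches the licence text."""
    return licence_similarity(copyright_text, licence_text) >= threshold
```

Of course this says nothing about mixed licences, missing attributions, or
references to /usr/share/common-licenses, which is where the manual review
effort actually goes.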

Friendly,

Sven Luther





Re: NEW handling ...

2005-03-18 Thread Sven Luther
On Fri, Mar 18, 2005 at 09:06:10AM +1100, Matthew Palmer wrote:
 On Thu, Mar 17, 2005 at 10:15:50PM +0100, Sven Luther wrote:
  To know in how many packages to split or not to split the packages ? 
 
 That would be one of the things that maintainers have gotten wrong in the
 past, yes.

So? Mistakes happen, and people learn from them. The current lesson one gets
from this whole mess is not to upload packages which require NEW processing.

 There's also been (to my personal knowledge, since I perpetrated examples of
 these crimes) problems with debian/copyright where neither a copy of the
 licence (nor a reference to /usr/share/common-licenses/) under which a
 package was under weren't listed, and also an issue of section.  In each
 case an ftpmaster (known as Cap'n Satan to some people, apparently) politely
 explained the problem to me an helped me to rectify the problem.

OK, but not-really-new packages are packages which were already in the archive
in the past, so this kind of argument is invalid. I mean, why reject a package
on such grounds just because the documentation was split out, or the library
got a new soname and thus, following policy, the source package name needs
renaming? They may deserve an RC bug, but not NEW.

 The ftpmasters have also had to deal with blatantly non-free stuff trying to
 be put into main, dangerously patent-encumbered stuff going into main, and
 all forms of bloated and unimpressive stuff floating by.

So what? This is hardly relevant to what we are discussing here.

 For an indication of the sorts of cruft that gets uploaded, take a walk
 through merkel.debian.org:/org/ftp.debian.org/queue/reject/*.reason.  These
 are the ones that got caught automatically -- but if maintainers can have
 these sorts of accidents, I see no reason to believe they'll be any more
 successful stopping other sorts of accidents.

So what? Define clear rules on what is or is not acceptable for package splits
and for library/kernel renaming due to new versions, and automate it.

   Automated NEW is IMO a thing we should never do.
  
  Semi-automated was the proposal, with a delayed acceptance (a week or so)
  where the ftp-masters can take positive action to prevent the automated NEW
  handling. No risk, if a packages is exageratedly splitted, they get the 
  email
  about it, notice it is exageratedly splitted, and veto it, and normal NEW
  behavior follows.
 
 Would you be happy if the ftpmasters put everything on auto-veto if there
 was nobody available to monitor the auto-new queue for a few days?

If the people handling the NEW queue can't get the job done, then they should
recruit more people to help out with it instead of making the whole project
suffer from their lack of availability.

  We could even imagine an automated analysis, which would differentiate
  unproblematic modifications (a few new packages of moderate size for 
  example),
  or policy-mandated NEW (same packages with just a different ABI version
  number, or a new kernel package), and provide them to ftp-masters via email
  and a keyword in the subject allowing this classification and easy filtering
  of problematic packages.
 
 I can imagine it, but the heuristics would be tricky at best.  But I'm sure
 you'll have a nicely working demo shortly.

We can start with:

  1) Packages whose source name did not change. These should basically be OK;
  we check that the number of binary packages did not grow beyond a given
  threshold (20-50%, maybe).

  2) Packages whose sole modification is the addition or change of their
  version or soname number are auto-accepted, and maybe older versions are
  marked as candidates for removal.

This would probably catch most of the 70 packages currently waiting in NEW,
which right now require manual processing and scarce ftp-master time.

  Mmm, i will try to find time to flesh out this proposal and propose code for
  it. Now if the existing code was written in a reasonable language :)
 
 I prefer other people's python to other people's perl.

Well, if I adapt code or provide patches, it is preferably in a language I
know, and Perl and Python are not among those, so... This is no criticism of
the languages chosen, just a possible inadequacy on my part for proposing
patches.

Friendly,

Sven Luther





Re: [Debian-ppc64-devel] Call For Help - Please support the ppc64 architecture

2005-03-17 Thread Sven Luther
On Thu, Mar 17, 2005 at 09:46:36AM +1100, Benjamin Herrenschmidt wrote:
 On Wed, 2005-03-16 at 20:27 +0100, Andreas Jochens wrote:
  Hello,
  
  This is a call for help from the 'ppc64' porters. 
  
  On 05-Mar-14 16:14, Martin Michlmayr wrote:
   Also, as with the amd64 port, there is disagreement about the name.
   While ppc64 would be nicer and in line with the LSB, our current
   PowerPC port is called powerpc and therefore it would make more sense
   to call the 64 bit port powerpc64.
  
  There has been a decision of the Debian Technical Committee concerning 
  the name of the amd64 port which basically says that the porting team 
  should decide on the architecture name generally (see [1]).
  
  The ppc64 porters decided to use the name 'ppc64' as the package 
  name a few month ago. 
 
  .../...
 
 It's a fully 64 bits setup as it seems ? That is rather inefficient.
 
 Have we any proper way of doing multiarch setups ? The proper way to
 do ppc64 is to have both archs libs and 32 bits userland for most
 things, as ppc64 native code is slightly slower.
 
 I have repeated that over and over again but it seems I have been
 ignored so far...

Not ignored: there is an effort, fully orthogonal to this pure-64 one, to get
ppc64 biarch going. We are somewhat held up by the work needed on the sarge
release, but it will happen in the near future.

Now, there is interest on the part of IBM and IBM's customers in getting ppc64
support, and although we have access to the Augsburg power5 box (but without a
virtual machine, so we can't really do kernel or installer tests), we don't
have those ppc64 machines IBM mentioned could be made available, which makes
work on the kernel and installer parts harder, at the least.

Friendly,

Sven Luther





Re: [Debian-ppc64-devel] Re: Bug#263743: Call For Help - Please support the ppc64 architecture

2005-03-17 Thread Sven Luther
On Wed, Mar 16, 2005 at 10:24:04PM +, Scott James Remnant wrote:
 On Wed, 2005-03-16 at 23:14 +0100, Andreas Jochens wrote:
 
  On 05-Mar-16 22:01, Scott James Remnant wrote:
   On Wed, 2005-03-16 at 22:48 +0100, Andreas Jochens wrote:
   
   My concern is the same as that of the Project Leader, that the existing
   powerpc port is called powerpc -- and that we should at least try to
   be consistent with already chosen architecture names.
   
  So you would add 'powerpc64' support to dpkg if the port changes its 
  package name accordingly?
  
 Yes, that'd be applied to the 1.13 branch straight away.
 
  However, I still do not understand why you and/or the Project Leader 
  want to override the decision of the porters and choose a different name
  than the LSB specifies. I am not saying that Debian should always follow 
  the LSB blindly, but I cannot see a good reason for deviating from the 
  LSB in this case.
  
 Because it's a 64-bit version of an already supported architecture.
 Having ppc and ppc64 would be fine, as would having powerpc and
 powerpc64.  Having powerpc and ppc64 is inconsistent.

Notice that powerpc used to be called ppc back then (98-ish or so), and that
the name got changed to powerpc.

Friendly,

Sven Luther





Re: [Debian-ppc64-devel] Re: Bug#263743: Call For Help - Please support the ppc64 architecture

2005-03-17 Thread Sven Luther
On Thu, Mar 17, 2005 at 12:10:59AM +, Scott James Remnant wrote:
 On Thu, 2005-03-17 at 00:31 +0100, Andreas Jochens wrote:
  Moreover, I seriously doubt that this is an honest argument. I think you 
  just want to decide the architecture name yourself.
  
 No, I would just prefer consistency.  You've deliberately chosen an
 architecture name that's jarringly different from your 32-bit variant;
 that's a rather bold thing to do, and I think you need to justify that.

Notice that ppc64 is the name widely known in the outside world to anyone
working with 64-bit PowerPC: both the kernel and the toolchain use it, all the
documentation referring to it uses ppc64, and the other distributions doing
64-bit PowerPC (Gentoo, SuSE and Red Hat) use it too, as well as all the cross
toolchains out there.

Do we really want to do something different out of pure dogma, despite the
cost involved?

 Obviously I have no power to overrule you on your choice of architecture
 name, but I'd like to try and appeal to some common sense in you, if
 there is any.

Hehe.

Friendly,

Sven Luther





Re: NEW handling ...

2005-03-17 Thread Sven Luther
On Thu, Mar 17, 2005 at 02:35:27AM +1100, Matthew Palmer wrote:
 On Wed, Mar 16, 2005 at 03:29:28PM +0100, Sven Luther wrote:
  Ideally we would see forming a little NEW-reviewing comittee which would
  facilitate the job of the ftp-masters. This is also in accordance of the
  small-team proposal in debian.
  
  It would be nice to have the opinion of the ftp-masters on this, if this
  seems credible, and if there are design issues with it.
 
 As far as a NEW-review team, when I raised this about a week ago, aj said
 that you'd effectively be ftpmasters, so why not be an ftpmaster?

Well, it is clear that the ftp-masters' job doesn't scale; they have admitted
as much with both the delays in NEW handling and their participation in the
Vancouver proposal.

Now the idea was to find some way to help them along, and this may be the
solution. Notice that they still have veto rights, so nothing can get past
them if they don't want it to.

Requiring them to take positive action to overrule the NEW review team or the
automated scripts may speed things up somewhat, though.

/me wonders what the ratio of really-new to not-really-new NEW packages is anyway.

Friendly,

Sven Luther





Re: NEW handling ...

2005-03-17 Thread Sven Luther
On Wed, Mar 16, 2005 at 07:56:58PM +0100, David Schmitt wrote:
 On Wednesday 16 March 2005 19:14, Matthias Urlichs wrote:
  Hi, Matthew Palmer wrote:
   As far as a NEW-review team, when I raised this about a week ago, aj said
   that you'd effectively be ftpmasters, so why not be an ftpmaster?
 
  Umm, no. I presume ftpmaster has other duties. Besides, eyeballing the new
  packages and drafting a list for ftpmaster to cross-read and implement
  isn't the same as implementing the actions directly; I suppose initially
  new ftpmasters would do the former until the old ftpmaster team is
 confident the new guys' (gals??) skills are up to the task.
 
 Where I am very surprised that nobody of the very vocal and oh-so-bored 
 maintainers who have nothing else to than waiting for their NEW packages has 
 started an effort to the latter effect: Collecting tidbits of information 
 concerning the various packages rotting in NEW and making that information 
 public.
 
 Without public information, there is no discussion.
 
 Without discussing those things (especially those ftp-masters have generally 
 express their distrust by ignoring them) nothing will happen.

Well, when one sees the response one gets to tidbits of information sent to
the ftp-masters about one's own package sitting in NEW, you can understand
how discouraging that kind of thrown-away work is.

Friendly,

Sven Luther





Re: NEW handling ...

2005-03-17 Thread Sven Luther
On Wed, Mar 16, 2005 at 04:36:25PM +0100, Matthias Urlichs wrote:
 - check that the package names are sane, don't conflict, and
   aren't gratuitiously many (a -doc package for 10 kbytes of
   documentation...) (what's the current opinion on that, anyway?)

Don't you think maintainers are big enough to know how to handle these kinds
of decisions? Or to ask the NEW team for help before uploading? If the package
causes problems, it could well be removed from the archive afterward.

Now, again, this is probably something that can be automated: the rules are
fixed, and if they are not broken, automated NEW processing is applied (with a
delay, as always), while if they are broken, reviewed NEW processing applies.

Friendly,

Sven Luther





Re: NEW handling ...

2005-03-17 Thread Sven Luther
On Thu, Mar 17, 2005 at 06:43:52PM +0100, Matthias Urlichs wrote:
 Hi,
 
 Sven Luther:
  On Wed, Mar 16, 2005 at 04:36:25PM +0100, Matthias Urlichs wrote:
   - check that the package names are sane, don't conflict, and
 aren't gratuitiously many (a -doc package for 10 kbytes of
 documentation...) (what's the current opinion on that, anyway?)
  
  Don't you think maintainers are big enough to know how to handle this kind 
  of
  decisions ? or ask the NEW-team for help before uploading ? If the package
  causes problem, it could well be removed from the archive afterward or
  something.
  
 If you ask the ftpmasters, you'd be surprised...

Well, as if they would reply to me.

 IMHO it's easier not to let that kind of  cruft get into the archive in
 the first place than it is to clean up afterwards.

But they get a full $DELAY days to block it if needed.

  Now, again, this is probably something that can get automated
 
 Sure. Some of it.

Computers are about automating tasks so humans don't need to handle them; you
only have to be clever enough to tell them what to do, and they will do it.

So, you are either overworked or clever, but not both :)

Friendly,

Sven Luther





Re: NEW handling ...

2005-03-17 Thread Sven Luther
On Thu, Mar 17, 2005 at 07:57:11PM +0100, Joerg Jaspert wrote:
 On 10231 March 1977, Sven Luther wrote:
 
  - check that the package names are sane, don't conflict, and
aren't gratuitiously many (a -doc package for 10 kbytes of
documentation...) (what's the current opinion on that, anyway?)
  Don't you think maintainers are big enough to know how to handle this kind 
  of
  decisions ?
 
 NO.
 For many of them this is a clear no. Unfortunately.

To know into how many packages to split a source, or whether to split it at all?

 Automated NEW is IMO a thing we should never do.

Semi-automated was the proposal, with a delayed acceptance (a week or so)
during which the ftp-masters can take positive action to prevent the automated
NEW handling. There is no risk: if a package is excessively split, they get
the email about it, notice the excessive splitting, veto it, and normal NEW
behavior follows.

We could even imagine an automated analysis which would distinguish
unproblematic modifications (a few new packages of moderate size, for
example), or policy-mandated NEW (the same packages with just a different ABI
version number, or a new kernel package), and present them to the ftp-masters
via email, with a keyword in the subject allowing this classification and easy
filtering of problematic packages.

Mmm, I will try to find time to flesh out this proposal and propose code for
it. Now if only the existing code were written in a reasonable language :)

Friendly,

Sven Luther





Accepted ocaml 3.08.3-0.experimental.1 (powerpc all source)

2005-03-17 Thread Sven Luther
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1

Format: 1.7
Date: Thu, 17 Mar 2005 16:32:30 +0100
Source: ocaml
Binary: ocaml-compiler-libs ocaml-native-compilers ocaml-base-nox ocaml-base 
ocaml ocaml-nox ocaml-interp ocaml-source
Architecture: source powerpc all
Version: 3.08.3-0.experimental.1
Distribution: experimental
Urgency: low
Maintainer: Sven Luther [EMAIL PROTECTED]
Changed-By: Sven Luther [EMAIL PROTECTED]
Description: 
 ocaml  - ML language implementation with a class-based object system
 ocaml-base - Runtime system for ocaml bytecode executables
 ocaml-base-nox - Runtime system for ocaml bytecode executables
 ocaml-compiler-libs - Ocaml interpreter and standard libraries
 ocaml-interp - Ocaml interpreter and standard libraries
 ocaml-native-compilers - Native code compilers of the ocaml suite (the .opt 
ones)
 ocaml-nox  - ML language implementation with a class-based object system
 ocaml-source - Sources for Objective Caml
Closes: 287538
Changes: 
 ocaml (3.08.3-0.experimental.1) experimental; urgency=low
 .
   * New upstream stable point version.
 - breaks binary compatibility, we thus have to up the ABI version
   to 3.08.3.
   * New features
 - ignore unknown warning options for forward and backward compatibility
 - runtime: export caml_compare_unordered (PR#3479)
 - camlp4: install argl.* files (PR#3439)
 - ocamldoc: add -man-section option (Closes: #287538)
 - labltk: add the solid relief option (PR#3343)
 - compiler: ocamlc -i now prints variance annotations
   * Bug fixes:
 - typing: fix unsoundness in type declaration variance inference.
   Type parameters which are constrained must now have an explicit variant
   annotation, otherwise they are invariant. This is not backward
   compatible, so this might break code which either uses subtyping or
   uses the relaxed value restriction (i.e. was not typable before 3.07)
 - typing: erroneous partial match warning for polymorphic variants 
(PR#3424)
 - runtime: handle the case of an empty command line (PR#3409, PR#3444)
 - stdlib: make Sys.executable_name an absolute path in native code 
(PR#3303)
 - runtime: fix memory leak in finalise.c
 - runtime: auto-trigger compaction even if gc is called manually (PR#3392)
 - stdlib: fix segfault in Obj.dup on zero-sized values (PR#3406)
 - camlp4: correct parsing of the $ identifier (PR#3310, PR#3469)
 - autoconf: better checking of SSE2 instructions (PR#3329, PR#3330)
 - graphics: make close_graph close the X display as well as the window 
(PR#3312)
 - num: fix big_int_of_string (empty string) (PR#3483)
 - num: fix big bug on 64-bit architecture (PR#3299)
 - str: better documentation of string_match and string_partial_match 
(PR#3395)
 - unix: fix file descriptor leak in Unix.accept (PR#3423)
 - unix: miscellaneous clean-ups
 - unix: fix documentation of Unix.tm (PR#3341)
 - compiler: fix error message with -pack when .cmi is missing (PR#3028)
 - cygwin: fix problem with compilation of camlheader (PR#3485)
 - stdlib: Filename.basename doesn't return an empty string any more 
(PR#3451)
 - stdlib: better documentation of Open_excl flag (PR#3450)
 - ocamlcp: accept -thread option (PR#3511)
 - ocamldep: handle spaces in file names (PR#3370)
 - compiler: remove spurious warning in pattern-matching on variants 
(PR#3424)
Files: 
 c21eadf8645fd20aea053b8c07b47239 766 devel optional 
ocaml_3.08.3-0.experimental.1.dsc
 fc99e46b3a5e2018e7741dae7a257d56 2484578 devel optional 
ocaml_3.08.3.orig.tar.gz
 e5a55ee181d710734c3574ad3ff2a1f3 42694 devel optional 
ocaml_3.08.3-0.experimental.1.diff.gz
 7ec40689d8ff481e3b0a416f61471ff9 6452346 devel optional 
ocaml-nox_3.08.3-0.experimental.1_powerpc.deb
 68297233346beddd0ee10b983dc36cd7 3101270 devel optional 
ocaml-native-compilers_3.08.3-0.experimental.1_powerpc.deb
 bc8e23b93c41ce824fed33ee149b2006 1820424 devel optional 
ocaml_3.08.3-0.experimental.1_powerpc.deb
 8c24698aa9eb48b3dae460127ed18547 160604 devel optional 
ocaml-base-nox_3.08.3-0.experimental.1_powerpc.deb
 e8f2805d9e69f7bd9f3fdc1e001437f3 67466 devel optional 
ocaml-base_3.08.3-0.experimental.1_powerpc.deb
 db9a9d4241dee5bd6f8a13f0e479fc27 2061978 devel optional 
ocaml-source_3.08.3-0.experimental.1_all.deb
 13b68a7c2690f3a9a88e3aad6b6db73b 934878 devel optional 
ocaml-interp_3.08.3-0.experimental.1_powerpc.deb
 7bbc3803b8daa53f8f4db4b851a64647 840040 devel optional 
ocaml-compiler-libs_3.08.3-0.experimental.1_powerpc.deb

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.0 (GNU/Linux)

iD8DBQFCOfI82WTeT3CRQaQRAj7tAJ9llnWxYclS8ULTifgMlgAsIiwtnwCfdpY5
XnvSVoUQXUmBx96TgNGW6q0=
=tZWW
-END PGP SIGNATURE-


Accepted:
ocaml-base-nox_3.08.3-0.experimental.1_powerpc.deb
  to pool/main/o/ocaml/ocaml-base-nox_3.08.3-0.experimental.1_powerpc.deb
ocaml-base_3.08.3-0.experimental.1_powerpc.deb
  to pool/main/o/ocaml/ocaml-base_3.08.3-0.experimental.1_powerpc.deb

NEW handling ...

2005-03-16 Thread Sven Luther
Hello,

After reading the mention of it in debian-weekly-news, I read with interest:

  
http://kitenet.net/~joey/blog/entry/random_idea_re_new_queue-2005-03-02-21-12.html

And I am not sure I get the hang of it.

You mention that not all packages will be able to go to this new.debian.org
archive, but that not-really-new packages are good candidates. How would one
decide? Is it the maintainer doing the upload who takes the decision? Will
there be an automated check during initial queue processing (for new binary
packages from an existing source package, for example, or wildcarded package
names for soname changes or kernels)?

Also, I don't understand what you gain this way, apart from added bureaucracy,
over simply accepting not-really-new packages out of hand, since Debian has
always considered the maintainer to be a responsible person and the ultimate
decision-maker for his packages in general. Is the maintainer's reputation
less valuable than that of a random set of DDs who signed off on the NEW package?

Maybe we could imagine automated-but-delayed NEW processing for
not-really-new packages? The initial queue handling notices that the package
is NEW, but also that it is not really new at all, and sends an email to the
maintainer and to the ftp-masters. If no action is taken after a given time
(7 days? more flexibility depending on the urgency of the upload? the same
rules as the testing scripts use?), and there is no override from the
ftp-masters, the package gets automatically processed.
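A minimal sketch of that delayed auto-accept idea, assuming a simple in-memory
queue. The 7-day delay and the veto come from the proposal above; everything
else (the data shapes and function name) is invented for illustration:

```python
# Hypothetical sketch: not-really-new uploads auto-accept after a
# delay unless an ftp-master has vetoed them in the meantime.
import datetime

DELAY = datetime.timedelta(days=7)


def process_queue(queue, vetoes, now):
    """Split not-really-new uploads into accepted and still-pending.

    `queue` is a list of (package, arrival_time) pairs; `vetoes` is a
    set of package names the ftp-masters have blocked, which fall back
    to normal manual NEW handling.
    """
    accepted, pending = [], []
    for pkg, arrived in queue:
        if pkg in vetoes:
            continue  # vetoed: goes through normal NEW review instead
        if now - arrived >= DELAY:
            accepted.append(pkg)
        else:
            pending.append((pkg, arrived))
    return accepted, pending
```

The point of the design is that the default is forward progress: the
ftp-masters only need to act to stop an upload, not to let one through.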

Friendly,

Sven Luther







Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-16 Thread Sven Luther
On Tue, Mar 15, 2005 at 07:42:24PM -0800, Steve Langasek wrote:
 For that matter, why is it necessary to follow testing on an ongoing basis,
 instead of just building against everything in stable once it's released?

I believe it is best to follow testing, since this allows those arches to
start doing the work throughout the whole release cycle, and in particular to
help fix arch-specific bugs in the tier-1 archive as soon as they are found.

Doing this only once stable has been released will cause an undue delay
(probably a very long one for slower arches) in being ready for an arch-stable
release, and will probably make the source changeset over tier-1 stable bigger
than necessary, since arch-specific fixes would only land after tier-1 stable
has been released.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-16 Thread Sven Luther
On Tue, Mar 15, 2005 at 11:43:10AM -0800, Steve Langasek wrote:
 On Tue, Mar 15, 2005 at 01:28:15PM +0100, Wouter Verhelst wrote:
  Op ma, 14-03-2005 te 16:09 -0800, schreef Steve Langasek:
   You do know that m68k is the only architecture still carrying around
   2.*2* kernels in sarge?
 
  False. See sparc32.
 
 $ madison -a sparc -s testing -r 'kernel.*2\.2'
 $
 
 ?
 
  Even if it is true that we do still carry 2.2 into sarge, that is only
  for Mac; not for any of the other subarchitectures.
 
 Nevertheless, it is a factor that contributes negatively to the
 maintainability of a stable release...

Well, we could drop the mac/m68k subarch then?

For that matter, it would probably make sense to drop 2.4 kernels entirely in
the not-so-distant future.

Friendly,

Sven Luther





Re: NEW handling ...

2005-03-16 Thread Sven Luther
On Wed, Mar 16, 2005 at 04:21:56AM -0500, Joey Hess wrote:
 Sven Luther wrote:
  After reading the mention of it in debian-weekly-news, i read with interest 
  :
  

  http://kitenet.net/~joey/blog/entry/random_idea_re_new_queue-2005-03-02-21-12.html
  
  And i am not sure to get the hang of it.
  
  You mention that not all packages will be able to do go to this 
  new.debian.org
  archive, but that not-really-new packages are good candidates. How would one
  decide, is it the maintainer doing the upload who takes the decision ?
 
 Yes of course, the same as a maintainer makes that decision before
 putting a NEW package up on people.d.o right now.
 
  will there be an automated check during initial queue processing (for new 
  binary
  packages for a same source package for example, or wildcarded packages for
  soname changes or kernels) ?
 
 As far as the idea goes, I guess DD's who care about speeding that up
 could automate or semi-automate their advocations for such packages in the
 new queue. (If they weren't throttled.)
 
  Also, i don't understand what you get more this way, apart from added
  bureaucrazy, over simply accepting not-really-new packages out-of hand
 
 My idea is not particularly targeted at that, it was more trying to
 see how we could decentralise the whole new queue processing issue. I
 don't really understand the benefits of requring NEW processing for
 binaries, so I'm not going to try to second-guess the ftp-masters on it.

Ah, so your proposal is entirely for really-new packages, and the only problem
would be packages that potentially contain licence or patent issues, which
makes it impractical in the first place.

A solution for this would be to place all really-NEW packages (the
not-really-NEW ones would be automatically processed) on a server outside the
US (so as not to have to care about stupid crypto export rules), in an area
requiring a valid DD-signed key for access. That way we could claim they are
used for internal review, and not trip any legal problems at all for illegally
distributing them. Then we can base your vote system on this for NEW packages,
again with a week, or whatever is judged a reasonable delay, so the
ftp-masters can review and oppose the decision of a certain number of
developers.

Ideally we would see a little NEW-reviewing committee form, which would
facilitate the job of the ftp-masters. This is also in accordance with the
small-team proposal in Debian.

It would be nice to have the opinion of the ftp-masters on this: whether it
seems credible, and whether there are design issues with it.

Friendly,

Sven Luther


-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]



Re: Vancouver hierarchy - proposed terminology

2005-03-16 Thread Sven Luther
On Tue, Mar 15, 2005 at 09:56:03PM +, Henning Makholm wrote:
 The debate is being hard to follow, with tiers, classes of citizenship
 and several other distinctions being tossed about, and not always
 clearly mapped to a particular one of the two divisions in the plan.
 I propose the following terminology (also paraphrasing the outline of
 the plan according to my understanding):
 
 1. A MEMBER architecture is one that the upload queue scripts knows
what to do about. The criteria for being a MEMBER are
  - must provide basic Unix functionality
  - must have a working buildd
  - must have X users, Y of which must be DDs
  - (et cetera)
 
 2. MEMBER architectures are divided into IRREGULAR and REGULAR
architectures. REGULAR architectures make stable releases in
lock-step; thus problems on one REGULAR architecture can block
the release of all others. The release process for REGULAR
architectures is controlled by the DPL-appointed release team,
currently using the testing suite as a common staging area.
The criteria for being REGULAR are
  - must be a MEMBER
  - must have a working installer
  - must have redundant buildd capacity
  - (et cetera)
 
An IRREGULAR architecture either does not make releases, or release
according to a schedule that does not match the REGULAR one. (One
possible instance of this is we'll try to parallel the REGULAR
release, but they are not going to wait for us if we blow a tyre
along the way). The porters must provide their own release
management and staging area (management).

I think it would make sense to distinguish between IRREGULAR and INDEVELOPMENT
architectures here: the IRREGULAR ones being those that want to do a release
but that the release team doesn't care to handle for whatever reason, and the
INDEVELOPMENT ones being those that the porters themselves feel are not ready
for a release.

That said, I wonder whether the solution to this would not simply be for the
release team to have one or more assistants in charge of the IRREGULAR
architectures, instead of insisting that the porters do the release on their
own. After all, these port release managers are prime candidates to be
promoted to full release-manager assistants or whatever later on, as they
already have the right credentials for it.

Your proposal also ignores the security team's requirements, which may be
orthogonal to the release team's requirements, as their timeline is entirely
different (post-release vs. pre-release).

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-15 Thread Sven Luther
On Mon, Mar 14, 2005 at 05:09:02PM -0800, Steve Langasek wrote:
 On Mon, Mar 14, 2005 at 04:38:35PM -0800, Thomas Bushnell BSG wrote:
  Steve Langasek [EMAIL PROTECTED] writes:
 
   The inclusion of ia64 in the release count is a projection, based on
   where I believe things are today.  Nothing the release team is doing
   ensures that ia64 is going to be a viable port, a year from now when
   we're trying to release etch; and nothing says that one or more of the
   other ports won't be in a position to meet those criteria and get added
   to the release list.
 
  How can they be, since they will be off in another archive?  You can't
  decide now to put an arch in scc and at the same time say you won't
  know whether it's in tier1 or tier2 until etch is close to release.
 
 Please re-read the proposal.  Not all the architectures proposed for
 release with etch are architectures that have enough download share to
 justify keeping them on the primary mirror network; these are
 *separate*, if hierarchically related, requirements.

I think one of the things that disturbs me about this is the leap from the
previous plan of selective arch mirroring to dropping everything but x86 from
the main archive, as the announcement seems to outline.

I really don't understand why we can't have everything on our main archive (on
ftp-master or whatever), and then from there use some clever trick to mirror
the arches depending on workload or whatever: a small set of all-tier mirrors,
most mirrors carrying a reduced set of architectures, and
ftp.arch.debian.org DNS resolving to a revolving list of the mirrors
handling that arch.
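The revolving-DNS idea above can be sketched in a few lines; this is only an
illustration of the mechanism, with invented mirror names and arch
assignments, and a real deployment would of course do the rotation in the DNS
zone itself rather than in application code.

```python
# Sketch of the proposed scheme: ftp.<arch>.debian.org resolves to a
# rotating subset of the mirrors that chose to carry that architecture.
# All hostnames and arch assignments below are made up for illustration.
from itertools import cycle

# Which architectures each mirror has chosen to carry (illustrative data).
MIRROR_ARCHES = {
    "mirror-a.example.org": {"i386", "powerpc", "sparc"},
    "mirror-b.example.org": {"i386", "amd64"},
    "mirror-c.example.org": {"i386", "powerpc", "m68k"},
}

def mirrors_for(arch):
    """Return, sorted, the mirrors that carry the given architecture."""
    return sorted(m for m, arches in MIRROR_ARCHES.items() if arch in arches)

def rotating_resolver(arch):
    """Yield mirror hostnames round-robin, as a rotating DNS entry might."""
    return cycle(mirrors_for(arch))

resolver = rotating_resolver("powerpc")
print(next(resolver))  # mirror-a.example.org
print(next(resolver))  # mirror-c.example.org
```

Popular arches get many mirrors in the rotation; a minority arch might get
only one or two, without ever leaving the main archive.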

So, could you clarify what consequences this plan has for the developer, the
upload queue, and the place where packages are stored? And explain how you got
from the let's diminish the bandwidth requirements of our mirrors by not
mirroring some arches plan to the let's kick most arches out of our archive
into a second-class archive left to fend for itself plan that was announced.

 Releasing archs via scc.debian.org (and mirror network) is not an
 obstacle, because scc.debian.org vs. ftp.debian.org is a *mirroring*
 convenience only.  The uploads still all go through
 ftp-master.debian.org, which is where the release action happens.

Ok, that clarifies the above, and is more in line with what was previously
planned. But why didn't you clearly state that in the announcement?

And will mirrors be able to decide which arches they want to mirror, or will
the set of arches be imposed by Debian?

Friendly,

Sven Luther





Re: Vancouver meeting - clarifications

2005-03-15 Thread Sven Luther
On Tue, Mar 15, 2005 at 08:58:44AM +0100, Andreas Barth wrote:
 Hello, world,
 | - the release architecture must have N+1 buildds where N is the number
 |   required to keep up with the volume of uploaded packages
 The reason for this proposal should be instantly clear to everyone who
 ever suffered from buildd backlogs. :)
 
 We want that all unstable packages are directly built under normal
 circumstances and that in the event of a buildd going down the arch does
 not suffer noticeably.  The long periods of trying to get some RC-bugfix
 in while some arch has a backlog should disappear with etch.

You mysteriously dropped the N should be = 1 proposal here, which is the one
that generated the outcry from the slower arches. Why?

 | - the Debian System Administrators (DSA) must be willing to support
 |   debian.org machine(s) of that architecture
 | - there must be a developer-accessible debian.org machine for the
 |   architecture.
 Well, the second point is - I hope - obvious why we want this. This first
 one is just a conclusion of the second.

And what happens if the DSAs are not willing to support the architecture but
someone else is? Does this someone else get invited into the DSA team? Or in
as a DSA assistant for that architecture, with limited or no power over the
other machines of the project?

 | - the Release Team can veto the architecture's inclusion if they have
 |   overwhelming concerns regarding the architecture's impact on the
 |   release quality or the release cycle length
 This is just more or less an emergency-exit: If we consider an architecture
 really unfit for release, we can use our veto. This is one of the things I
 hope will never happen.

Yes, but it is an all-powerful veto right, which should come with adequate
counter-limitations to ensure that nobody will abuse it in the future.

 Having said this, this all doesn't exclude the possibility for a
 non-release arch to have some testing which can be (mostly) in sync with
 the release architectures testing - just that if it breaks, the release
 team is not forced anymore to hold the strings together.  For example,
 the amd64-people are doing something like that right now.

Yes, but the proposal doesn't invite this to happen, nor does it show clearly
that this is a wanted thing.

 I hope that this mail is able to shed some light onto these issues. Please
 accept my apologies for the missing information in the first mail.

Thanks for the clarifications,

Friendly,

Sven Luther





Discussion about tier-2 testing and how to achieve a release of tier-2 arches after all. (Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-15 Thread Sven Luther
On Mon, Mar 14, 2005 at 11:23:48PM -0800, Steve Langasek wrote:
 On Mon, Mar 14, 2005 at 10:32:57AM +0100, Sven Luther wrote:
  On Mon, Mar 14, 2005 at 12:23:12AM -0800, Steve Langasek wrote:
   On Sun, Mar 13, 2005 at 11:21:29PM -0800, Thomas Bushnell BSG wrote:
Steve Langasek [EMAIL PROTECTED] writes:
 
 On Sun, Mar 13, 2005 at 10:47:15PM -0800, Thomas Bushnell BSG wrote:
  Steve Langasek [EMAIL PROTECTED] writes:
 
   The sh and hurd-i386 ports don't currently meet the SCC requirements, as
   neither has a running autobuilder or is keeping up with new packages.
 
  It is impossible for any port under development to meet the SCC
  requirements.  We need a place for such ports.  Where will it be?
 
 On the contrary, the amd64 port does, and is currently maintained
 completely outside official debian.org infrastructure.
 
The amd64 port did not always.  Ports under development take time; the
amd64 port is at a late state in its development.  I don't understand
why autobuilding is important to SCC; maybe if you could explain that
I would understand.
 
   The point is that the ftpmasters don't want to play host to various
   ports that *aren't* yet matured to the point of usability, where being
   able to run a buildd is regarded as a key element of usability in the
   port bootstrapping process.  The amd64 team have certainly shown that
   it's possible to get to that point without being distributed from the
   main debian.org mirror network.
 
  I don't really understand that point though, since the plan is to drop
  mirror support for all minor arches, what does it cost to have a 3-level
  archive support:
 
1) tier 1 arches, fully mirrored and released.
 
2) tier 2 arches, mostly those that we are dropping, maybe mirrored from
scc.debian.org in a secondary mirror network. (why not ftp.debian.org/scc
though ?).
 
3) tier 3 arches, or in development arches, available on
ftp.debian.org/in-devel or something.
 
  I don't see how having the in-devel arches be hosted on alioth instead of
  on the official debian ftp server would cause a problem.
 
  Also, i don't understand why scc.debian.org is better than
  ftp.debian.org/scc; really, ideally we could have /debian, /debian-scc, and
  /debian-devel or something such. Is it really a physical problem for
  ftp-master to hold all these roles? What is it exactly that the ftp-masters
  want to drop all these arches for?
 
 Nothing in the SCC plan implies a separate dak instance for
 scc.debian.org vs. ftp.debian.org.  On the contrary, since there are
 release architectures that would not be distributed via ftp.debian.org
 under this plan, it is a requirement that all of the architectures in
 question continue to use ftp-master.debian.org for uploads and the dak
 instance.

Ok. This clarification was missing from your original announcement though, and
there are still questions about how tier-2 arches will be able to set up their
own testing support. Taking snapshots from unstable is not a viable solution.

I understand that the testing scripts don't scale well (they run once for
each arch, don't they?), but moving testing to per-arch archives, as I
proposed in my 'build tier-2 arches from testing' proposal, raises the
question of how the per-arch testing setup and the tier-1 setup communicate,
and also of where an arch porter uploads his arch-specific fixes.

He can either upload to tier-1 unstable, with the risk of seeing the upload
held up by tier-1 testing migration issues and not trickling down to the
per-arch testing setup, or upload to the per-arch unstable archive, with the
risk of the change getting lost when a new tier-1 upgrade comes.

Ideally the fix would be uploaded to both tier-1 unstable and per-arch
unstable, with a mechanism set up to hold imports from tier-1 testing until
the arch fix is included there.

Obviously this needs:

  1) cooperation from the tier-1 maintainers to apply the tier-2 arch fix, and
  suitable workarounds for when the maintainer doesn't do his job (NMU allowed
  after a given amount of time, or if a new upload was done that fails to fix
  the arch-specific problem without a good reason).

  2) a way to work around packages that are kept out of testing for reasons
  unrelated to the tier-2 arch.

  3) discipline on the porters' part to do and follow up on two uploads, and
  maybe a third if 2) happens, thus imposing a higher workload on them than
  strictly necessary.

I don't know; maybe building from testing is not such a good idea after all.
Perhaps we should build from tier-1 unstable instead, with a testing script
whose additional criterion would be that the package is in tier-1 testing.

Although building from tier-1 testing is the most natural idea if there is
hope of tier-2 stable point releases happening.
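The gated-migration idea above can be sketched as follows; this is a toy
illustration with invented package data, not the real britney code, and the
function name and inputs are hypothetical.

```python
# Toy sketch of the proposed tier-2 rule: a package built from tier-1
# unstable may enter the per-arch testing only if the same version has
# already reached tier-1 testing and the arch-specific build succeeded.
# Package names and versions below are invented for illustration.

def may_enter_arch_testing(pkg, version, tier1_testing, arch_built_ok):
    """Gate per-arch testing migration on tier-1 testing membership
    plus a successful build on this architecture."""
    return tier1_testing.get(pkg) == version and arch_built_ok

# Hypothetical snapshot of what tier-1 testing currently contains.
tier1_testing = {"glibc": "2.3.2-1", "gcc": "3.3-4"}

# glibc 2.3.2-1 built fine on the arch and is in tier-1 testing: migrates.
assert may_enter_arch_testing("glibc", "2.3.2-1", tier1_testing, True)
# gcc 3.3-5 is newer than what tier-1 testing has: held back on this arch.
assert not may_enter_arch_testing("gcc", "3.3-5", tier1_testing, True)
```

The point of the extra criterion is that the per-arch archive can never drift
ahead of tier-1 testing, so at freeze time it converges on the same package
set.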

 There is no problem for ftp-master to continue filling this role; but it
 already doesn't act as ftp.debian.org -- that role is filled

debian/kernel security issues (Was: Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-15 Thread Sven Luther
On Mon, Mar 14, 2005 at 04:51:55PM -0800, Matt Zimmerman wrote:
 On Tue, Mar 15, 2005 at 01:14:30AM +0100, Sven Luther wrote:
 
  On Mon, Mar 14, 2005 at 06:10:30PM -0500, Andres Salomon wrote:
   Yes, I would like to reiterate that coordination between Martin Pitt, the
   Ubuntu kernel team, and the Debian kernel team has been an invaluable
   resource for Debian; there are a lot of security fixes in Debian
   kernels that were brought to my attention by either Fabio or Martin.
  
  Because they are in the security-announce-loop and we are not though,
  right?
 
 Can you restate the question more clearly?  In particular, expand the
 pronouns they and we, and explain what the security-announce-loop is.

There is this vendor-specific-security-announce-with-embargo thingy.

The debian kernel team mostly handles the unstable and testing kernels, and is
not in the loop for getting advance notice of those problems, so we cannot
build fixed versions until the vulnerability gets announced, and thus we can't
upload kernels in a timely fashion like Ubuntu or other vendors do, who often
have a couple of weeks of advance warning. On slower arches this could be a
problem.

The debian-security team handles stable only, and there are no security
updates for unstable until well after the embargo is over, and for testing a
bit after that, depending on whether the kernels get hinted in or not.

To have proper security support in testing or unstable for the kernel, the
debian-kernel security team, or at least a few members of it, needs to be made
aware of the embargoed security holes and get a chance to fix them in advance,
maybe with a private or otherwise non-public copy of our svn tree (using svk,
maybe).

This is not an Ubuntu-related problem though, and the help the Ubuntu
kernel/security team has provided us has been invaluable, but it would maybe
not be necessary if the information were not wrongfully withheld from us in
the first place.

Friendly,

Sven Luther





Re: debian/kernel security issues (Was: Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-15 Thread Sven Luther
On Tue, Mar 15, 2005 at 04:21:21AM -0500, Joey Hess wrote:
 Sven Luther wrote:
  There is this vendor-specific-security-announce-with-embargo thingy.
  
  The debian kernel team mostly handles the unstable and testing kernel, is
  not in the loop for getting advance advice on those problems, so we cannot
  build fixed versions until the vulnerability gets announced, and thus we
  can't upload kernels in a timely fashion like ubuntu or other vendors do,
  who often have a couple week of advance warnings. On slower arches this
  could be a problem.
  
  The debian-security team is handling stable only, and there are no security
  updates for unstable until way after the embargo is over, and for testing a
  bit after that, depending if the kernels get hinted in or not.
  
  To have proper security-in-testing-or-unstable for the kernel, the
  debian-kernel security team, or at least a few members of it, need to be
  made aware of the embargoed security holes, and get a chance to fix them in
  advance, maybe with a private or security non-public copy of our svn tree
  (using svk maybe).
  
  This is not a ubuntu related problem though, and the help the ubuntu
  kernel/security team has provided us was invaluable, but it should maybe
  not be necessary if the information was not unrightfully hold from us in
  the first time.
 
 You seem to be implying that ubuntu is providing you with confidential
 prior warning about kernel security holes, but I really doubt this,

Nope, but I was at one point hinted that I should wait a couple of days before
starting a 12-hour build.

 since many of the ubuntu secutity advisories that I've backchecked
 against the debian kernels have turned out to still be unfixed in the
 kernel teams's svn weeks later.

There is nobody actively doing Debian security for unstable kernels right now;
well, not consistently, and not with the kind of advance warning that is
needed. This is rather a disappointment, I believe. But I understand that our
security team doesn't want to, or cannot, care about unstable/testing security
updates.

 My experience is that the kernel security team is not very quick to fix
 publically known security holes, or to make uploads specifically for
 those holes once they have a fix. Even if we limit it to fixing the
 kernel-source packages and ignore the whole issue of rebuilding
 kernel-image packages for all arches.

No, but it is coming, and should improve post-sarge, hopefully. The kernel
team has come a long way since Herbert abandoned it over internal Chinese
political differences, but there is still no real interaction between the
kernel team and the security team.

Still, people on the vendor list or whatever have weeks of advance knowledge
of those security problems.

Also, as said, post-sarge the rebuild issues will be fixed by a single kernel
package infrastructure, although I am not sure how our autobuilders will
support that; we will see. The sarge kernel is mostly frozen anyway, so it is
out of our hands.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-15 Thread Sven Luther
On Tue, Mar 15, 2005 at 01:21:59AM -0800, Steve Langasek wrote:
 On Mon, Mar 14, 2005 at 11:00:12AM +0100, Sven Luther wrote:
 
   There are a few problems with trying to run testing for architectures
   that aren't being kept in sync.  First, if they're not being kept in
   sync, it increases the number of matching source packages that we need
   to keep around (which, as has been discussed, is already a problem);
   second, if you want to update using the testing scripts, you either have
   to run a separate copy of britney for each arch (time consuming,
   resource-intensive) or continue processing it as part of the main
   britney run (we already tread the line in terms of how many archs
   britney can handle, and using a single britney check for archs that
   aren't keeping up doesn't give very good results); and third, if
   failures on non-release archs are not release-critical bugs (which
   they're not), you don't have any sane measure of bugginess for britney
   to use in deciding which packages to keep out.
 
  What about building the scc (or tier 2 as i would say) arches from testing
  and not unstable? This way you would have the main benefit of testing (no
  RC bugs, no breakage-of-the-day kind of problems). Obviously this means you
  would need some kind of override for per-arch fixes, but preferably these
  fixes will be applied twice, once to the per-arch repo, and then to a new
  unstable upload which fixes the problem. Uploading only to unstable may
  cause undue delays.
 
 Building against testing instead of against unstable also means you
 don't have any of the controls testing is supposed to provide to protect
 against uninstallable packages: as soon as you build a new library
 package, everything that depends on it is uninstallable.

No, because each arch will have per-arch testing support. It is just a way for
the arches to catch up with testing, and thus be in line for a release: when
testing gets frozen, those arches will naturally catch up to the frozen and
then released stable version, and be ready for a point release later on.

 This really makes unstable snapshotting, or building stable once it's
 released as Anthony has also suggested in this thread, look like much
 better options than trying to build out of testing.

We all agree that random unstable snapshotting is no good idea, but I disagree
with the stable snapshotting, since it means that the tier-2 arches will only
be able to really start working once stable is released, while at the same
time working toward the next stable release, thus introducing a skew of
effort.

The proposal I have, although maybe not perfect, will allow the arches to
continue working at mostly the same pace as the rest of the project, and thus
not be excluded.

   For these reasons, I think the snapshotting approach is a better option,
   because it puts the package selection choices directly in the hands of
   the porters rather than trying to munge the existing testing scripts
   into something that will make reasonable package selections for you.
 
  So, why don't you do snapshotting for testing? Do you not think handling
  all those thousands of packages manually, without the automated testing
  thingy, would be an over-burden for those guys?
 
  You are really just saying that the testing scripts don't scale well, and
  instead of finding a solution to this, you say let's drop a bunch of
  architectures and make it someone else's problem.
 
 I think we should discuss various solutions to address the needs of
 porters involved in non-release archs.  I think trying to run a full
 testing infrastructure, or build against testing instead of unstable, is
 unlikely to be a good solution for those porters in practice because of
 some of the issues that I've pointed out.

Could you be more clear about this? Which issues are those? And how do you
make sure those arches have a stable base to do their daily work on? And if
testing is not appropriate for them, why don't we drop testing altogether?
Again, this smells of kicking them out and leaving them in the cold, although
I doubt that was your intention.

Friendly,

Sven Luther





Re: Building tier-2 against testing (was: Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-15 Thread Sven Luther
On Tue, Mar 15, 2005 at 11:18:54AM +0100, David Schmitt wrote:
 On Tuesday 15 March 2005 10:41, Sven Luther wrote:
  Could you be more clear about this ? which issues are those ? 
 
 Sven, Steve is referring to the first part of his mail, where he says that 
 building from testing will lose any of the controls testing is supposed to 
 provide to protect against uninstallable packages.

Ah, ok, then I think I have replied to this already.

  And how do 
  you make sure those arches have a stable base to do their daily work on ?
 
 More stable than unstable? As stable as testing? Please explain this to me, I 
 am little slow in the morning.

You know that following unstable, especially for non-mainstream arches, means
random breakage-of-the-day and such, right? It also means that you don't get
the protection against RC bugs, which would affect the stability of your main
work platforms and make clean installations from scratch often impossible for
long stretches of time.

  And if testing is not appropriate for them, why don't we drop testing
  altogether ? 
 
 Off the top of my head I would say because testing was appropriate for a
 small number of arches but didn't appropriately scale for a bigger number
 of arches where the probability of breakage on any single one of them
 approaches one.

Yes, but the thing is that the two-distribution approach, one for uploading
packages, the other as a base of proven-good packages, is useful and even
needed. This is currently denied to the arch porters.

Furthermore, having a testing which follows and mirrors the tier-1 testing is
vital to allow for stable point releases. Rejecting tier-2 testing support
based on tier-1 testing nips in the bud any chance of a stable point release
later on.

 Also testing is absolutely needed to be able to properly support stable after 
 the release: this needs synced arches, else security updates would need to 
 recompile several different minor diverging versions each time.

No, I don't think this holds. Once stable is released, it is totally divorced
from testing, and only interacts with it through stable-proposed-updates.

Friendly,

Sven Luther





Re: Security support for tier-2 (was: Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-15 Thread Sven Luther
On Tue, Mar 15, 2005 at 12:22:34PM +0100, David Schmitt wrote:
 On Monday 14 March 2005 17:18, Sven Luther wrote:
  On Mon, Mar 14, 2005 at 11:12:29AM -0500, David Nusinow wrote:
   On Mon, Mar 14, 2005 at 09:54:49AM -0600, John Goerzen wrote:
It is not unstable that I am (most) worried about.
   
It is the lack of any possibility of a stable release that concerns me.
Even if the people for a given arch were to build a stable etch, it
would have no home in Debian, would suffer from being out of the loop
on security updates, etc.
  
   Well, we do know the security team needs help. What I'd love to see is
   each port have someone on the security team to handle their specific
   bugs, binary builds and testing. That might scale better and decrease the
   overall load on the team. This is all in line with what seems to be the
   central thesis of the proposal: shift more of the core burden to the
   porters. Of course, this does demand a lot, but the burden has to go
   somewhere, and the people currently carrying large portions of it are
   saying they can't do this any more.
 
  Notice too that the exact same people whose help is needed are those that
  are pissed off by this proposal, and whose help has been repeatedly
  rejected in the past.
 
 
 Sven, is there a specific reason you believe that the proposers will 
 prevent[1] security-support on scc.d.o for tier-2 arches or are you only 
 ranting?

Because of [1]: they said they will drop security on tier-2 arches and that
porters should be left to fend for themselves, did they not?

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-15 Thread Sven Luther
On Tue, Mar 15, 2005 at 10:51:24AM +0100, Julien BLACHE wrote:
 -BEGIN PGP SIGNED MESSAGE-
 Hash: SHA1
 
 Steve Langasek [EMAIL PROTECTED] wrote:
 
  And keeping IA64 in the loop is just another joke from the release
  team. It'd be interesting to find out, but I bet more m68ks were sold
  than IA64 last year.
 
  Which of these two architectures are you more likely to be able to run a
  current 2.6 kernel on, BTW?
 
 I fail to see why this matters at all. It's not in your list of
 requirements, remember ?
 
  You do know that m68k is the only architecture still carrying around
  2.*2* kernels in sarge?
 
 Yes. But there are 2.4 kernels available too, don't forget to mention
 that fact. No 2.6, though, but that's not a problem right now. Might
 become a problem for etch, I agree.

There is 2.6 work on m68k; just not all subarches are ready for it yet.

 m68k folks, is there anything in the works for 2.6 ?

Yep, it has been running for a couple of months now, last time I was informed
about it, at least on the amiga subarch.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-15 Thread Sven Luther
On Tue, Mar 15, 2005 at 12:30:59PM +0100, Ingo Juergensmann wrote:
 On Tue, Mar 15, 2005 at 10:51:24AM +0100, Julien BLACHE wrote:
 
   You do know that m68k is the only architecture still carrying around
   2.*2* kernels in sarge?
  Yes. But there are 2.4 kernels available too, don't forget to mention
  that fact. No 2.6, though, but that's not a problem right now. Might
  become a problem for etch, I agree.
  m68k folks, is there anything in the works for 2.6 ?
 
 To my knowledge there are even buildds running 2.6 on m68k. 
 Even more: 
 I took just another piece of m68k hardware, which Debian bought for the m68k
 port, to Roman Zippel on March, 3rd in order to let him write the needed
 drivers for that accelerator card. So, there will even be new drivers for
 m68k soon that will be made for 2.6 kernel series, I think. 
 
 With the new proposal of de facto dropping m68k support, I'm this -- close
 to recommend to Roman, that he better should invest his time into other
 projects, because Debian wouldn't appreciate his work to bring up another
 public m68k machine. 

Notice though that the m68k porters don't actively participate in the kernel
team, and package their stuff in their own corner, which may be the reason for
this perceived problem.

Friendly,

Sven Luther





Re: ports.debian.org (Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-15 Thread Sven Luther
On Tue, Mar 15, 2005 at 11:47:37AM +, Colin Watson wrote:
 On Mon, Mar 14, 2005 at 05:38:30PM +0100, Sven Luther wrote:
  I have proposed tier-1 ports for the main arches, tier-2 ports for the other
  ready ports but dropped from official support, and tier-3 ports for
  in-development ports.
 
 My problem with that is that I think we (and more importantly, our
 users) would always have to look up what these numbers meant. Using
 words instead of numbers would be preferable. Furthermore your tiers

Well, the users don't care; they point their apt sources at:

  ftp.arch.debian.org

and everything works well for them, given the warning about possible delays in
security updates (although the nonexistence of security.arch.debian.org
should hint at that anyway).

 don't match the Vancouver proposal, in which there would be

I don't care about the Vancouver proposal all that much, since its basic
premise is the dropping of the minority arches, that is: no stable release, no
testing infrastructure, no security updates. If that is not what was meant,
then a new announcement clarifying the points is in order.

 architectures that would be released and officially supported but not
 distributed from ftp.d.o.

I think the Vancouver proposal, or at least its announcement, was profoundly
lacking in clarity, and didn't clearly separate the three different problems
and their solutions:

  1) mirror bandwidth and space issues.

  2) release management and autobuilders.

  3) security update for stable releases.

Am I not right in thinking that those were the key issues? And you who were
there, do you not think it would be welcome of the vancouver-cabal :) to
clarify their remedies for each of those points separately?

 The fundamental idea I'm trying to capture is less popular or
 minority interest or something, but I can't think of a way to do that
 that (a) doesn't sound offensive and (b) isn't incredibly wordy. ports
 is the best I've heard so far.

Well, it probably sounds offensive because the proposed solution is offensive
in the first place, isn't it?

And ports, well, it's all nice, but how do you measure inequality in port
treatment then? Will we have scc or something like that?

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-15 Thread Sven Luther
On Tue, Mar 15, 2005 at 01:41:01PM +0100, Ingo Juergensmann wrote:
 On Tue, Mar 15, 2005 at 12:59:43PM +0100, Sven Luther wrote:
 
   With the new proposal of de facto dropping m68k support, I'm this -- 
   close
   to recommend to Roman, that he better should invest his time into other
   projects, because Debian wouldn't appreciate his work to bring up another
   public m68k machine. 
  Notice that m68k doesn't actively participate in the kernel-team, and 
  package
  their stuff in their own corner though, which may be the reason for this
  perceived problem.
 
 Has the kernel team made any advances to the m68k kernel team for a closer
 cooperation? Or did they just yelled Hey! We are now taking over the kernel
 development, no matter if more capable people are outside of the project!?

I sent an email to cts and discussed it some with him. I don't remember the
details, and I didn't bring it up again when I met him in Oldenburg.

Participating in the kernel team mostly means moving the
kernel-image|patch|whatever packages into the kernel svn archive, though, and
maybe this was the sticking point.

That said, once we move to a single kernel package for all arches (for a given
version), more active participation will be needed. Individual patchset and
config file handling will still be in the hands of the individual arch
maintainers.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-15 Thread Sven Luther
On Tue, Mar 15, 2005 at 09:22:06AM -0800, Matt Zimmerman wrote:
 On Tue, Mar 15, 2005 at 11:04:09AM +0100, Julien BLACHE wrote:
  And now you're pointing us to the Ubuntu website, but it's a bit
  late.
 
 As soon as a proper website was up and running, the URL in the announcement
 above became a redirect to the more comprehensive site.

I first heard about it through Slashdot, though :).

Friendly,

Sven Luther






Re: debian/kernel security issues (Was: Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-15 Thread Sven Luther
On Tue, Mar 15, 2005 at 08:51:30AM -0800, Matt Zimmerman wrote:
 On Tue, Mar 15, 2005 at 09:50:22AM +0100, Sven Luther wrote:
 
  On Mon, Mar 14, 2005 at 04:51:55PM -0800, Matt Zimmerman wrote:
   On Tue, Mar 15, 2005 at 01:14:30AM +0100, Sven Luther wrote:
   
On Mon, Mar 14, 2005 at 06:10:30PM -0500, Andres Salomon wrote:
 Yes, I would like to reiterate that coordination between Martin Pitt, 
 the
 Ubuntu kernel team, and the Debian kernel team has been an invaluable
 resource for Debian; there are a lot of security fixes in Debian
 kernels that were brought to my attention by either Fabio or Martin.

Because they are in the security-announce-loop and we are not though, 
right ? 
   
   Can you restate the question more clearly?  In particular, expand the
   pronouns they and we, and explain what the security-announce-loop is.
  
  There is this vendor-specific-security-announce-with-embargo thingy.
 
 ...which is the subject of a lot of unfounded speculation by those who are
 not familiar with the process.
 
  To have proper security-in-testing-or-unstable for the kernel, the
  debian-kernel security team, or at least a few members of it, need to be 
  made
  aware of the embargoed security holes, and get a chance to fix them in
  advance, maybe with a private or security non-public copy of our svn tree
  (using svk maybe).
 
 Herbert Xu used to fill this role.  After he resigned, William Lee Irwin (I
 believe) volunteered to be the point of contact for security issues.  If
 William is not active in this role, the kernel team should nominate someone
 else who can be trusted by the security team to work on sensitive issues,
 and have them contact the security team.
 
  This is not a ubuntu related problem though, and the help the ubuntu
  kernel/security team has provided us was invaluable, but it should maybe not
  be necessary if the information was not unrightfully hold from us in the 
  first
  time.
 
 This problem has nothing whatsoever to do with Ubuntu, and I appreciate you
 retracting this implication.  Whether you believe in coordinated disclosure
 is equally irrelevant; the terms of such information is set by the rightful
 party (e.g., the person who discovered it), and to violate those terms would
 represent a breach of trust.

I never made any such implication; I'm not even sure what implication you are
speaking about here. I only mentioned that the current kernel team has no
access to the vendor-sec stuff, and as such it is logical that the help flows
from Ubuntu (who has access to it, right?), since the Ubuntu kernel team has a
couple of weeks' advance notice of the problems. Help also flows the other way
around, though.

Friendly,

Sven Luther





Re: Bits from the CD team, 2005-03-16

2005-03-15 Thread Sven Luther
On Wed, Mar 16, 2005 at 01:27:37AM +, Steve McIntyre wrote:
 Thus, for sarge, we plan to offer officially:
 
  * ISO images for business card and netinst CDs (for all architectures)
  * ISO images for normal install CDs (for all architectures)
  * ISO images for install DVDs (i386-only planned)

May I ask for powerpc DVD images too?

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Sun, Mar 13, 2005 at 08:45:09PM -0800, Steve Langasek wrote:
 The much larger consequence of this meeting, however, has been the
 crafting of a prospective release plan for etch.  The release team and
 the ftpmasters are mutually agreed that it is not sustainable to
 continue making coordinated releases for as many architectures as sarge
 currently contains, let alone for as many new proposed architectures as
 are waiting in the wings.  The reality is that keeping eleven
 architectures in a releasable state has been a major source of work for
 the release team, the d-i team, and the kernel team over the past year;
 not to mention the time spent by the DSA/buildd admins and the security
 team.  It's also not clear how much benefit there is from doing stable
 releases for all of these architectures, because they aren't necessarily
 useful to the communities surrounding those ports.

Will minutes of the meeting be made available? This is a rather drastic step
to be taken, and it comes as rather a big surprise, so I believe that full
minutes of the meeting at which that decision was taken are in order. I also
wonder about the absence of porter representatives at said meeting and said
decision-making, not to mention the rest of the project. So much for
communication and transparency.

I don't think this will be a good PR move, though; we will probably soon see a
Slashdot thread about Debian dropping all architectures except the main 4 or
something like that.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 12:23:12AM -0800, Steve Langasek wrote:
 On Sun, Mar 13, 2005 at 11:21:29PM -0800, Thomas Bushnell BSG wrote:
  Steve Langasek [EMAIL PROTECTED] writes:
 
   On Sun, Mar 13, 2005 at 10:47:15PM -0800, Thomas Bushnell BSG wrote:
Steve Langasek [EMAIL PROTECTED] writes:
 
 The sh and hurd-i386 ports don't currently meet the SCC requirements, 
 as
 neither has a running autobuilder or is keeping up with new packages.
 
It is impossible for any port under development to meet the SCC
requirements.  We need a place for such ports.  Where will it be?
 
   On the contrary, the amd64 port does, and is currently maintained
   completely outside official debian.org infrastructure.
 
  The amd64 port did not always.  Ports under development take time; the
  amb64 port is at a late state in its development.  I don't understand
  why autobuilding is important to SCC; maybe if you could explain that
  I would understand.
 
 The point is that the ftpmasters don't want to play host to various
 ports that *aren't* yet matured to the point of usability, where being
 able to run a buildd is regarded as a key element of usability in the
 port bootstrapping process.  The amd64 team have certainly shown that
 it's possible to get to that point without being distributed from the
 main debian.org mirror network.

I don't really understand that point, though. Since the plan is to drop mirror
support for all minor arches, what would it cost to have a 3-level archive
setup:

  1) tier 1 arches, fully mirrored and released.

  2) tier 2 arches, mostly those that we are dropping, maybe mirrored from
  scc.debian.org in a secondary mirror network (why not ftp.debian.org/scc
  though?).

  3) tier 3 arches, or in-development arches, available on
  ftp.debian.org/in-devel or something.

I don't see how hosting the in-devel arches on the official Debian ftp server
instead of on alioth would cause a problem.

Also, I don't understand why scc.debian.org is better than ftp.debian.org/scc;
ideally we could have /debian, /debian-scc, and /debian-devel or something
like that. Is it really a physical problem for ftp-master to hold all these
roles? What is it exactly that makes the ftp-masters want to drop all these
arches?

Mirrors could then choose to go with 1) only (most of them will), or also
mirror 2) and/or 3).
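As a sketch (the paths are purely illustrative, not existing archive locations), the single-host layout suggested above could look like:

```
# Hypothetical layout on a single ftp host:
#   ftp.debian.org/debian        tier 1: released, fully mirrored
#   ftp.debian.org/debian-scc    tier 2: dropped arches, secondary mirrors
#   ftp.debian.org/debian-devel  tier 3: in-development arches
#
# A user of a tier-2 arch would then use something like:
deb http://ftp.debian.org/debian-scc unstable main
```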

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Sun, Mar 13, 2005 at 11:36:47PM -0800, Steve Langasek wrote:
 For the specific case of sparc, it's worth noting that this architecture
 was tied for last place (with arm) in terms of getting the ABI-changing
 security updates for the kernel; it took over 2 months to get all
 kernel-image packages updated for both 2.4 and 2.6 (which is a fairly
 realistic scenario, since woody also shipped with both 2.2 and 2.4),
 which is just way too unresponsive.  The call for sparc/arm kernel folks
 in the last release update was intended to address this; correct me if
 I'm wrong, but to my knowledge, no one else has stepped forward to help
 the kernel team manage the sparc kernels.

Note that post-sarge we will have a single kernel package which will build
for all arches, as the Ubuntu folks do, so this problem with kernel security
updates may go away for all arches.

BTW, how much does the human intervention needed for buildd signing play into
the delays you see, and did you discuss the possibility of a fully automated
autobuilder setup, as Ubuntu does and as I advocated years ago?

 Well, sparc is not in any danger of being dropped from SCC. :)  As I
 said, none of the current sarge candidate architectures are.

But there will be no stable scc release, if I understand the plan correctly?
Is this really what you wanted, or did I misunderstand? What would be the
problem with adding an scc stable release in later point releases? We did that
already in the past, if I remember correctly.

  In general I would like to say that supporting a lot of architectures was
  an important difference between Debian and other distributions.  I know the
  drawbacks of this but I just do not want to hide my opinion that I do not
  like the idea of reducing the number of supported architectures that 
  drastical.
  IMHO the effect would be that people will start forking Debian for porting
  issues and we will loose the power of those porters while they will spend
  time for things they would not have to do else.
 
 I certainly agree that portability is one of Debian's selling points,
 and I also have a pet architecture that doesn't appear to make the
 cut for etch; but I don't think it's a coincidence that the release
 cycle got longer when we doubled the arch count between potato and
 woody, and I *know* it's not a coincidence that we have a long release
 cycle for sarge while trying to wrangle those same 11 architectures.

Heh, and most of the DPL candidates just posted that they didn't believe the
release delay would drop if we dropped arches. I also remember that ia64 was
one of the most problematic arches for autobuilding the ocaml packages in
August or so, and it was worse off than m68k, mips or arm if I remember
correctly.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 10:10:32AM +0100, Ingo Juergensmann wrote:
 On Mon, Mar 14, 2005 at 07:37:51AM +0100, Andreas Tille wrote:
 
  In general I would like to say that supporting a lot of architectures was
  an important difference between Debian and other distributions.  I know the
 
 In fact it was one of the 2 main reasons for my choice. apt-get was the other
 main reason. 
 
  drawbacks of this but I just do not want to hide my opinion that I do not
  like the idea of reducing the number of supported architectures that 
  drastical.
  IMHO the effect would be that people will start forking Debian for porting
  issues and we will loose the power of those porters while they will spend
  time for things they would not have to do else.
 
 And the userbase will get smaller. It's not unlikey that I will power off
 all non-supported archs then and migrate to another distribution that has
 not those internal problems like Debian. And I think that I won't be the
 only one.
 IMHO scc.d.o will result in focussing on those archs, making it worse and
 worse for the other archs. Implementing scc.d.o is equally to dropping those
 older archs in my eyes. It's just another wording. 

Note that there is real unclarity about what the problem is, and the wording
of the announcement really didn't help with this.

If the main problem is the mirror network, I think it does make sense to drop
some arches from it. After all, it could well be that for some arches we have
more mirrors than users, and a single mirror, or a smaller group of mirrors,
for those arches could well be worth it. It could probably be done at the DNS
level, so that ftp.arch.debian.org automagically points to the right place,
and it would be transparent.
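A minimal sketch of such a DNS-level redirection, assuming a BIND-style zone file for debian.org (the secondary mirror name is invented for illustration):

```
; Hypothetical ftp.arch.debian.org aliases, each pointing at whichever
; host actually carries that architecture's packages:
ftp.i386    IN  CNAME  ftp.debian.org.
ftp.m68k    IN  CNAME  mirror.example.org.  ; assumed secondary mirror
```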

But again, I feel that the announcement was one thing, but it lacks much
information about the reasons which pushed the decision and the individual
technical problems to be overcome. Are the minutes of the release-team meeting
publicly available?

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 01:14:47AM -0800, Steve Langasek wrote:
 On Mon, Mar 14, 2005 at 09:09:09AM +0100, Adrian von Bidder wrote:
  On Monday 14 March 2005 05.45, Steve Langasek wrote:
   Architectures that are no longer being considered for stable releases
   are not going to be left out in the cold.  The SCC infrastructure is
   intended as a long-term option for these other architectures, and the
   ftpmasters also intend to provide porter teams with the option of
   releasing periodic (or not-so-periodic) per-architecture snapshots of
   unstable.
 
  [I'm a pure x86 user atm, so if this is a non-issue, I'll gladly be 
  educated]
 
  Why only unstable?  In other words: will it be possible for scc arches to 
  have a testing distribution?  Obviously, this testing/arch will not 
  influence the release candidate arch testing, but will allow real releases 
  of scc-arches if a RM/release team steps up.
 
 (A popular question...)
 
 There are a few problems with trying to run testing for architectures
 that aren't being kept in sync.  First, if they're not being kept in
 sync, it increases the number of matching source packages that we need
 to keep around (which, as has been discussed, is already a problem);
 second, if you want to update using the testing scripts, you either have
 to run a separate copy of britney for each arch (time consuming,
 resource-intensive) or continue processing it as part of the main
 britney run (we already tread the line in terms of how many archs
 britney can handle, and using a single britney check for archs that
 aren't keeping up doesn't give very good results); and third, if
 failures on non-release archs are not release-critical bugs (which
 they're not), you don't have any sane measure of bugginess for britney
 to use in deciding which packages to keep out.

What about building the scc (or tier 2, as I would say) arches from testing
rather than unstable? This way you would have the main benefit of testing (no
RC bugs, no breakage-of-the-day kind of problems). Obviously this means you
would need some kind of override for per-arch fixes, but preferably these
fixes would be applied twice: once to the per-arch repo, and then in a new
unstable upload which fixes the problem. Uploading only to unstable may cause
undue delays.

Ideally, all-source-packages-in-svn or whatever, plus source-only full
autobuilds, would help greatly with this, allowing a branch of the repo to be
uploaded to the per-arch archive and the fix to be automatically picked up by
the next unstable upload.

 For these reasons, I think the snapshotting approach is a better option,
 because it puts the package selection choices directly in the hands of
 the porters rather than trying to munge the existing testing scripts
 into something that will make reasonable package selections for you.

So, why don't you do snapshotting from testing? Don't you think handling all
those thousands of packages manually, without the automated testing thingy,
would be an overburden for those guys?

You are really just saying that the testing scripts don't scale well, and
instead of finding a solution to this, you say let's drop a bunch of
architectures and make it someone else's problem.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 10:49:20AM +0100, Thijs Kinkhorst wrote:
 On Mon, March 14, 2005 10:10, Ingo Juergensmann said:
  It would be better when the project would be honest and state that it want
  to become a x86-compatible only distribution (with the small tribute to
  powerpc users) than this braindead thingie.
 
 The problems associated with carrying many archs have been well
 demonstrated. This proposal is a way to address these problems. If you
 want to keep all archs as a part of the central architecture, you have to
 come up with a way to tackle the given problems (and not just shout that
 you want to keep them - just continuing without changing anything is not
 realistic). If you disagree, please come up with an alternative plan
 yourself (preferably a worked out plan like this one).

The main problem with this proposal is that it vaguely hints at a couple of
technical details, hides the discussion that happened around it, and then
lists a huge set of strong remedies without analysing the detailed problems
one by one and without stating which problems each remedy aims to fix, thus
making it more difficult for porters and other third parties to focus their
help.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 01:59:02AM -0800, Steve Langasek wrote:
 On Mon, Mar 14, 2005 at 09:31:44AM +0100, Romain Francoise wrote:
  Steve Langasek [EMAIL PROTECTED] writes:
 
   The following people in Debian leadership roles have also expressed
   their support:
 
 Andreas Schuldei (DPL candidate)
 Angus Lees (DPL candidate)
 Branden Robinson (DPL candidate)
 Jonathan Walther (DPL candidate)
 
  How exactly is DPL candidate a leadership role?  I can understand that
  the aforementioned people are under the spotlights right now because of
  the election, but it does not qualify as leadership.
 
  Also, our current DPL isn't listed in the supporters of this plan, does
  it mean that he wasn't consulted?  Or does it mean that he was
  consulted, but disagreed?  If so, may I ask why?
 
 Yes, we did consult Martin as well, and he had some concerns about the
 proposal that prevented him from signing on to it as-is.  That's fine;
 and many of those concerns are being discussed now in this thread.  I
 felt a little silly listing the DPL candidates as supporting it without
 also listing the current DPL, but we'd already asked the DPL candidates
 about it before Martin got back to us, so I was bound to feel a little
 silly either way. ;)

And how do you reconcile that with the fact that most of those people told us
recently on debian-vote that they believed dropping an architecture would not
help with the delay of the release? And given the times of the posts, they
probably knew about this plan before replying, especially those on the scud
team. Pure demagogy, then?

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 10:39:24AM +0100, Robert Millan wrote:
 On Sun, Mar 13, 2005 at 08:45:09PM -0800, Steve Langasek wrote:
  
  To be eligible for inclusion in the archive at all, even in the
  (unstable-only) SCC archive, ftpmasters have specified the following
  architecture requirements:
  
  [...]
  
  - binary packages must be built from the unmodified Debian source
(required, among other reasons, for license compliance)
 
 Is this a simple sanity requirement (i.e. no hacked crap being uploaded to the
 archive), or does it imply that all packages in base (or base + 
 build-essential)
 need to be buildable from unmodified source?
 
  - the port must demonstrate that they have at least 50 users
 
 How do you demonstrate that?  Via popularity-contest?

But then, installing popularity-contest by default was dropped for
debian-installer rc3, so ...

Friendly,

Sven Luther





Re: mplayer 1.0pre6a-4 for i386 and PowerPC and sparc

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 10:40:55AM +0100, A Mennucc wrote:
 you may find  source, i386 and powerpc and sparc binaries
 of mplayer  1.0pre6a-4  in your friendly repository
 
 http://tonelli.sns.it/pub/mplayer/sarge
 
 with special thanks to  David Moreno Garza  for the sparc binary
 and Simon McVittie for powerpc

BTW, what is the current status of getting this included in Debian? It seems
that even Ubuntu now carries an mplayer binary, so there is really no reason
at all for not having one in Debian, especially as many of those who opposed
mplayer back then are now working for Ubuntu.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 02:12:48AM -0800, Thomas Bushnell BSG wrote:
 Sven Luther [EMAIL PROTECTED] writes:
 
  BTW, how much of the human intervention needed for buildd signing
  plays in the delays you see, and did you discuss the possibiliity of
  a fully autobuilder setup, like ubuntu does and i advocated years
  ago ?
 
 I can't answer for Steve, but it seems to me that signing isn't the
 problem.  There is not a big delay in getting packages signed; the
 delay is much more frequently getting them actually built.

Well, I will disagree. It only takes the signer going on vacation for uploads
to break for a couple of days or weeks, which immediately breaks builds of
packages depending on said packages and causes delays in the build queue,
especially on slower arches. This is not theoretical; it has already happened
to me in the past.

 Where human delay did come into play was in getting the xfree86 mess
 cleaned; in theory it should have taken one or two days, but in
 practice it took much longer.

Why not fully eliminate the human factor? Ubuntu does automated builds from
source-only uploads: the package sources are built and signed by a developer,
autobuilt on all arches, and I don't believe the results are individually
signed after that.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 10:16:20AM +, Martin Michlmayr wrote:
 * Aurélien Jarno [EMAIL PROTECTED] [2005-03-14 10:56]:
  Would it be possible to have a list of such proposed architectures?
 
 amd64, s390z, powerpc64, netbsd-i386 and other variants, sh3/sh4, m32r

ppc64 is not currently a candidate for a separate arch; the way to go is a
biarch solution, much like the current amd64 solution in sarge, as word from
the main ppc64 developers is that a pure64 solution would be a measurable
performance hit on POWER (unlike amd64, which starts off saddled with a weaker
instruction set).

One could add per-subarch optimized builds and mirrors too, though.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 11:26:07AM +0100, Andreas Barth wrote:
 * Sven Luther ([EMAIL PROTECTED]) [050314 11:20]:
  On Mon, Mar 14, 2005 at 10:39:24AM +0100, Robert Millan wrote:
   On Sun, Mar 13, 2005 at 08:45:09PM -0800, Steve Langasek wrote:
- the port must demonstrate that they have at least 50 users
 
   How do you demonstrate that?  Via popularity-contest?
  
  But then, popularity-contest installation per default was dropped for
  debian-installer rc3, so ...
 
 We don't say it needs 50 entries in popularity-contest, but just: 50
 users. How this is demonstrated, may be different. Enough traffic on the
 porter list might also be enough - or enough bug reports coming from
 that arch. Or whatever. I don't expect that to be the blocking critieria
 for any arch.

Well, but wouldn't re-enabling popularity-contest by default for sarge help a
lot with that?

Friendly,

Sven Luther





Re: mplayer 1.0pre6a-4 for i386 and PowerPC and sparc

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 11:26:27AM +0100, A Mennucc wrote:
 On Mon, Mar 14, 2005 at 11:09:27AM +0100, Sven Luther wrote:
  On Mon, Mar 14, 2005 at 10:40:55AM +0100, A Mennucc wrote:
   you may find  source, i386 and powerpc and sparc binaries
   of mplayer  1.0pre6a-4  in your friendly repository
   
   http://tonelli.sns.it/pub/mplayer/sarge
   
   with special thanks to  David Moreno Garza  for the sparc binary
   and Simon McVittie for powerpc
  
 BTW, what is the current status of inclusion of this in debian ? It seems 
 that
  even ubuntu now carries a mplayer binary, so there is really no reason at 
  all
  for not having one in debian. Especially as many of those who opposed 
  mplayer
  back then are now working for ubuntu.
 
 
 I am waiting from anybody in the ftp master team to say anything

As I said, since Ubuntu has mplayer, there is really no reason to stall it for
Debian now.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 11:28:08AM +0100, Andreas Schuldei wrote:
 On Mon, Mar 14, 2005 at 11:05:16AM +0100, Sven Luther wrote:
  And how do you reconcile the fact that most of those told us recently on
  debian-vote that they believed that dropping an architecture will not help
  with the delay of the release ? And giving the times of the posts, they
  probably knew about this plan previously to replying that, especially those 
  of
  the scud team. Pure demagogy then ? 
 
 To my best knowledge Branden did not know about the proposal at
 the time of the LWN interview. So from him it was no demagogy but
 his own honest, private oppinion. I and AJ knew about it since we
 were involved in the meeting. We both sidestepped the question
 since the proposal was still not ready at that time.

Ok.

 btw: the results of that interview were not posted to -vote.

Maybe they should have been, at that.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 11:38:57AM +0100, Andreas Barth wrote:
 * Sven Luther ([EMAIL PROTECTED]) [050314 11:35]:
  Well, but wouldn't reenabling the popularity-contest by default for sarge 
  help
  a lot on that ? 
 
 There was a technical reason why it was removed - more or less, if you

Yes, it asked one question during the install, didn't it? One potentially
confusing question for the poor user.

 want this changed, you need to submit a patch (but it might be too late
 now, as RC3 should be more or less building).

It was too late a month ago already.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 12:41:18PM +0100, David Schmitt wrote:
 On Monday 14 March 2005 11:10, Rene Engelhard wrote:
  pcc is barely at 98%. I don't think that barrier should be that high. We
  *should* at last release with the tree most important archs: i386, amd64,
  powerpc.
 
 Please, 98% is not high. It is just a call to porters to get their act 
 together.
 
 I don't believe that any (sane) maintainer would refuse FTBFS fixing patches 
 (see hurd-i386 and k{net,free}-bsd stuff). These criterions are just a IMHO 
 necessary step to put the load on those people who want to use the arch 
 instead of those who maintain central infrastructure.

Like the arm autobuilders, for example? Hmm, but then the arm buildd
maintainer is also our main ftp-master, right?

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 12:47:20PM +0100, David Schmitt wrote:
 On Monday 14 March 2005 12:21, Ingo Juergensmann wrote:
 [...]
  but in fact this is already a decision being
  made by just a handful of people without asking those who will be affected
  by that decision.
 
 I always thought those who do the work, also get to make the decisions.

Not really, unless you want to fork the whole Debian infrastructure, that is.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 01:02:34PM +0100, David Schmitt wrote:
 On Monday 14 March 2005 11:00, Sven Luther wrote:
  On Mon, Mar 14, 2005 at 01:14:47AM -0800, Steve Langasek wrote:
   There are a few problems with trying to run testing for architectures
   that aren't being kept in sync.  First, if they're not being kept in
   sync, it increases the number of matching source packages that we need
   to keep around (which, as has been discussed, is already a problem);
   second, if you want to update using the testing scripts, you either have
   to run a separate copy of britney for each arch (time consuming,
   resource-intensive) or continue processing it as part of the main
   britney run (we already tread the line in terms of how many archs
   britney can handle, and using a single britney check for archs that
   aren't keeping up doesn't give very good results); and third, if
   failures on non-release archs are not release-critical bugs (which
   they're not), you don't have any sane measure of bugginess for britney
   to use in deciding which packages to keep out.
 
  What about building the scc (or tier 2 as i would say) arches from testing
  and not unstable ? this way you would have the main benefit of testing (no
  RC bugs, no breakage of the day kind of problems).
 
 I'm only guessing: because keeping those archs in testing didn't work out and 
 is (broadly) the cause of dropping them in the first place?

No, you misunderstood. Let me state the plan again:

  1) People always upload to unstable. Only sources are considered, and people
  who upload unbuildable sources without having tested them are utterly flamed
  for their lack of discernment :).

  2) The autobuilders build those packages for unstable for the tier 1 arches.

  3) After some time, the packages are moved to testing, as done by the
  testing scripts for the tier 1 arches.

  4) The tier 2 arches build their stuff from testing. There are two possible
  results of this:

4.1) The package builds without problem; it is added to the tier 2
archive.

4.2) The package fails to build. This used to be an RC-critical FTBFS, but
is not so anymore. The porters are responsible for fixing the bug and
uploading a fixed package to unstable, as they do now. 

  4.2.1) The unstable-built package passes into testing rather quickly, and
  is then rebuilt for the tier 2 arches; back to 4).

  4.2.2) The unstable-built package is held out of testing for some issue
  not relevant to the tier 2 arch. It can then be built in an
  arch-specific way, and uploaded directly to the arch in question, or
  maybe through an arch-specific mini-testing script.

This would have the benefits of:

  - Not having slower arches hold up testing.
  - Not overloading the testing scripts.
  - Allowing the tier 2 arches to have the benefit of testing, that is, an
archive free of packages suffering from RC bugs and breakage-of-the-day,
as they would if they built from unstable.
  - Diminishing the workload of the tier 2 autobuilders, since they only have
to build proven-good packages, and not random stuff going into unstable.
  - Still allowing the tier 2 arches to be part of Debian, and to hope for a
sarge release, which leads to:

  5) Once a stable release is done, the above can be repeated by the tier 2
  arches, until they reach release quality and maybe become part of a future
  stable point release.

Now, given this full description, does my proposal seem more reasonable? 
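To make the ordering of the steps above concrete, here is a toy sketch of the
proposed package flow. This is purely illustrative (the function and state
names are hypothetical, not anything in the Debian toolchain); it only traces
which stage a package visits in which order under the plan.

```python
# Toy model of the proposed tier 1 / tier 2 package flow.
# State names ("unstable", "tier1-built", ...) are hypothetical labels,
# chosen only to mirror the numbered steps in the plan above.

def tier2_flow(package, builds_on_tier2):
    """Trace the states a package goes through in the proposed scheme."""
    trace = ["unstable"]            # 1) source uploaded to unstable
    trace.append("tier1-built")     # 2) autobuilders build it for tier 1 arches
    trace.append("testing")         # 3) testing scripts migrate it
    if builds_on_tier2:
        trace.append("tier2-archive")          # 4.1) build OK, enters tier 2 archive
    else:
        trace.append("ftbfs")                  # 4.2) fails to build on tier 2
        trace.append("porter-fix-in-unstable") # porters upload a fix to unstable
    return trace

if __name__ == "__main__":
    print(tier2_flow("somepkg", builds_on_tier2=True))
    print(tier2_flow("otherpkg", builds_on_tier2=False))
```

The key property the sketch shows is that tier 2 arches only ever build from
testing, so a tier 2 build failure loops back through unstable (step 4.2)
rather than blocking the tier 1 testing migration.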

   For these reasons, I think the snapshotting approach is a better option,
   because it puts the package selection choices directly in the hands of
   the porters rather than trying to munge the existing testing scripts
   into something that will make reasonable package selections for you.
 
  So, why don't you do snapshotting for testing? Do you not think handling
  all those thousands of packages manually, without the automated testing
  thingy, would be an over-burden for those guys?
 
 Obviously britney/dak is available from cvs.d.o and meanwhile also as a Debian 
 package. So the question for me (administering two sparc boxes) is why _we_ 
 don't set up our own testing when obviously the ftp-masters and core release 
 masters are not willing to do the work for us? 

I guess this is also the message I get from them. The same happens for NEW
processing, and the solution is to set up our own unofficial archive, thus
leading to the split and maybe a future fork of Debian.

 My answer is that I don't care enough for two out of 15 boxes for the hassle; 
 I will update them to sarge, be grateful for the grace time given and - iff 
 nobody steps up to do the necessary porting and security work - donate them 
 to Debian when etch's release leaves my current nameserver without security 
 updates.
 
 What would you say, if I asked you to provide security support for sparc 
 because _I_ need it for my nameservers? 

There was no comment from the security team about this new plan; we don't know

Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 02:09:54PM +0100, Kevin B. McCarty wrote:
 David Schmitt wrote:
 
  I cannot remember[0] a question to the candidates regarding 
  architecture-dropping. The only question pertaining to the release[1] was 
  only answered by Matthew Garrett, saying that it would be helpful if (in 
  future) the release team would communicate their list of release criteria 
  well in advance of their estimated time of release. Which obviously happened 
  now.
  
  Please point me to the posts, so I can add it to my page[2]
 
 I believe Sven was referring to the Linux Weekly News interview with the
 DPL candidates.  It can be found here:

Yep, probably. I believe it should have been posted to debian-vote too, though.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 06:23:50AM -0500, Joey Hess wrote:
 Sven Luther wrote:
  Yes, it asked one question during the install, didn't it? One potentially
  confusing question for the poor user.
 
 That's almost as inaccurate as your earlier statement that
 popularity-contest was dropped from d-i for rc3 (it was dropped before
 rc1). Please refer to [EMAIL PROTECTED] for the facts of
 the matter.

OK, I stand corrected.

Friendly,

Sven Luther





Re: Edge and multi-arch (was Re: Bits (Nybbles?) from the Vancouver release team meeting)

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 12:18:33PM +, Martin Michlmayr wrote:
 * Tollef Fog Heen [EMAIL PROTECTED] [2005-03-14 13:10]:
  | I have yet to see a proposal how to do multiarch in the right way.
  What is lacking in the proposals out there?
 
 The following is what I (as DPL) sent to the release people in January
 to get them to discuss these issues.  I didn't post this to a list
 because what I wrote is kinda rough and I wanted the release people to
 clarify and post it.  Since this hasn't happened yet, I might just as
 well post my original message.  But please note that some important
 things might be missing in it.
 
 
 Basically, there have been a lot of discussions about multi-arch and
 some people seem to think that after sarge we'll _obviously_ move to
 multi-arch.  Well, this is not so obvious to me.  In particular, I see
 no consensus among ftpmaster/archive people, release people, toolchain
 people, porters, and basically everyone else that this is the way to

Well, there is no clear consensus about what Debian is and should be in the
future among these people to start with, so ...

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 12:36:45PM +0100, Roland Mas wrote:
 Sven Luther, 2005-03-14 10:50:13 +0100 :
 
  I don't see how having the in-devel arches be hosted on alioth
  instead on the official debian ftp server would cause a problem.
 
 The amd64 archive on Alioth has been (and still is) the major cause of
 many many problems.  In a few words: it eats space, which disrupts all
 Alioth projects using CVS (for a start).

Err, I meant the other way around. Obviously the aim is to have the tier 2
arches dropped from the main ftp servers of Debian (do we still run some of
those on Sun-donated sparc machines, though?), and moved to alternate
solutions like the amd64 move to Alioth or whatever, which I think is a broken
concept.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 01:06:05PM +0100, Ingo Juergensmann wrote:
 On Mon, Mar 14, 2005 at 12:25:13PM +0100, Andreas Tille wrote:
 
  Sorry for using stupid, braindead and others. But there are no other
  words for crap like this, imho.
  Hmm, while I in principle share your point about keeping the architectures,
  it does not sound very sane to be that harsh.  If a group of volunteers
  faces the situation that they are not able to continue the work they did
  with the expected quality, they have to face the real world and make a
  decision.  Normally everything that needed to be done was done, and thus
  it would be fine if enough people stood up and cared about fitting the
  criteria (98% compiled packages, security for the arch, supporting the
  kernel).  If there are no such people who do the work, talking about
  stupid decisions makes no sense.
 
 There were offers of help in manpower and machines for archs that had
 problems in keeping up. Those were rejected. Punishing those archs for the
 mistakes of those buildd admins rejecting helping hands is just plain
 stupid. The user is the one who will suffer from that decision. 

Notice that one of the main arches having problems some time back was arm, and
the buildds were maintained by whom? elmo.

Friendly,

Sven Luther





Re: Bits (Nybbles?) from the Vancouver release team meeting

2005-03-14 Thread Sven Luther
On Mon, Mar 14, 2005 at 11:00:22PM +1100, Hamish Moffatt wrote:
 On Mon, Mar 14, 2005 at 10:10:32AM +0100, Ingo Juergensmann wrote:
  All the work and support over all those years by all those users and porters
  will vanish with that stupid idea, imho. 
 
 Ingo, obviously you are pissed off. But really, is there much benefit in
 making *releases* for the SCC architectures? 
 
 The packages will still be built and d-i maintained as long as there are
 porters interested in doing that work for the architecture. The only
 difference is that those architectures won't influence testing and they
 won't be officially released.

So, there is no stable release to run to be sure you have no problems, and no
testing to make sure some random developer who doesn't think past x86 doesn't
break your upgrade on a random basis.

Friendly,

Sven Luther




