Re: Ubuntu discussion at planet.debian.org

2004-10-29 Thread Matt Zimmerman
On Sun, Oct 24, 2004 at 05:42:23PM -0600, Marcelo E. Magallon wrote:

  Testing is by design all-or-nothing.  As long as a single architecture
  lacks buildd support for t-p-u, the buildd support for t-p-u is as
  good as missing.

This isn't by design, it's simply the policy which is currently in use.

-- 
 - mdz




Re: Ubuntu discussion at planet.debian.org

2004-10-27 Thread Anthony Towns
On Wed, Oct 27, 2004 at 03:12:40AM +0200, Javier Fernández-Sanguino Peña wrote:
 Wouldn't those numbers be something that popularity-contest could
 produce? Maybe that warrants a wishlist bug...

Dunno; I tend to think self-identification of distro is probably
better than trying to guess it automatically -- I've had unstable in my
sources.list for ages, with pinning to stick with testing, e.g.
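
(For reference, the pinning described above is only a few lines in
/etc/apt/preferences; a minimal sketch, assuming both testing and
unstable appear in sources.list:

    Package: *
    Pin: release a=testing
    Pin-Priority: 900

    Package: *
    Pin: release a=unstable
    Pin-Priority: 200

With priorities like these apt tracks testing by default, while single
packages can still be pulled in with "apt-get install -t unstable foo".)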

But hey, it's easier to ignore bad numbers that've been generated, than to
make use of good numbers that haven't.

Cheers,
aj

-- 
Anthony Towns [EMAIL PROTECTED] http://azure.humbug.org.au/~aj/
Don't assume I speak for anyone but myself. GPG signed mail preferred.

``[S]exual orgies eliminate social tensions and ought to be encouraged.''
  -- US Supreme Court Justice Antonin Scalia (http://tinyurl.com/3kwod)




Re: Ubuntu discussion at planet.debian.org

2004-10-26 Thread Andreas Barth
* Marcelo E. Magallon ([EMAIL PROTECTED]) [041026 09:35]:
 On Sun, Oct 24, 2004 at 01:37:21AM -0500, Manoj Srivastava wrote:
Okay, it's a month old, but there hasn't been any since.
http://lists.debian.org/debian-devel-announce/2004/09/msg5.html
We are also still missing official autobuilders for
testing-proposed-updates on alpha and mips.  All other architectures
appear to be keeping up with t-p-u uploads.

  Missing a buildd on an arch or two is a far cry from t-p-u
being unsupported.

  Testing is by design all-or-nothing.  As long as a single architecture
  lacks buildd support for t-p-u, the buildd support for t-p-u is as
  good as missing.  You could do builds by hand, but then again, how many
  developers actually do that?

And, please don't do it. We definitely can't release without official
buildds for all architectures, so please no binary-only uploads to t-p-u
(if the release team thought otherwise, I'd have started building
mips and alpha there long ago).


Cheers,
Andi
-- 
   http://home.arcor.de/andreas-barth/
   PGP 1024/89FB5CE5  DC F1 85 6D A6 45 9C 0F  3B BE F1 D0 C5 D1 D9 0C




Re: Ubuntu discussion at planet.debian.org

2004-10-26 Thread Rob Browning
Hamish Moffatt [EMAIL PROTECTED] writes:

 *Shrug* I haven't seen much need here. It's usually possible to track
 down earlier package versions if I really need to, from Debian, or
 snapshot.debian.net, or out-of-date mirrors (:).

Well, it was handy to have my originals here when gnu.org was
compromised (for example).

-- 
Rob Browning
rlb @defaultvalue.org and @debian.org; previously @cs.utexas.edu
GPG starting 2002-11-03 = 14DD 432F AE39 534D B592  F9A0 25C8 D377 8C7E 73A4




Re: Ubuntu discussion at planet.debian.org

2004-10-26 Thread Javier Fernández-Sanguino Peña
On Mon, Oct 25, 2004 at 01:36:16AM +1000, Anthony Towns wrote:
 On Sat, Oct 23, 2004 at 06:48:31AM +0200, Jérôme Marant wrote:
  There are packages that never enter testing and nobody notices because
  everyone uses unstable (sometimes because of buggy dependencies).
 
 This isn't true: http://www.debian-administration.org/?poll=3
 
 Sure, it's a tiny enough sample that the 32% probably isn't terribly
 reliable; but heck, even one person is enough to refute a claim of
 "everyone", and there are 59 people noted there.

Wouldn't those numbers be something that popularity-contest could
produce? Maybe that warrants a wishlist bug...


forget it: #255000 (now somebody needs to provide a patch that suits the 
maintainer...)
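
(For what it's worth, a crude version of such numbers can already be
scraped from apt itself; a hypothetical one-liner, not what #255000
proposes:

    apt-cache policy | grep -o 'a=[a-z-]*' | sort | uniq -c | sort -rn

It counts how many package lists on the host come from each archive
(a=stable, a=testing, a=unstable) -- though, as aj notes, pinned
systems make the raw counts misleading.)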

Regards

Javier




Re: Ubuntu discussion at planet.debian.org

2004-10-25 Thread Andres Salomon
On Sun, 24 Oct 2004 09:28:20 +0200, Matthias Urlichs wrote:

 Hi, Henrique de Moraes Holschuh wrote:
 
 Is there really a developer out there that doesn't do even the most
 rudimentary VC by keeping copies of all the source packages he has
 uploaded/worked on ?
 
 What for? You can always get your old versions from snapshot.debian.net.
 

*smirk*.  I'm reminded of a quote...

"Only wimps use tape backup: _real_ men just upload their important stuff
on ftp, and let the rest of the world mirror it."
-- Linus




Re: Ubuntu discussion at planet.debian.org

2004-10-25 Thread Wouter Verhelst
On Sun, Oct 24, 2004 at 12:44:35AM -0300, Henrique de Moraes Holschuh wrote:
 On Sun, 24 Oct 2004, Matthew Palmer wrote:
  On Sat, Oct 23, 2004 at 02:40:24PM -0500, Manoj Srivastava wrote:
   On Sat, 23 Oct 2004 11:54:05 +0200, Jérôme Marant [EMAIL PROTECTED] 
   said: 
Manoj Srivastava [EMAIL PROTECTED] writes:
Okay, that's what t-p-u is roughly for, but the fact is that it's
quite painful.

Could you elaborate on that? Why is it so painful?
   
Probably because you need to maintain packages for both unstable and
testing at the same time.
   
 That is a simple branching issue in the version control
system, no?
  
  A huge rush of air fills the list as hundreds of developers fill their
  lungs to collectively say "I don't use version control"...
 
 Is there really a developer out there that doesn't do even the most
 rudimentary VC by keeping copies of all the source packages he has
 uploaded/worked on ?

I never did that. Even though disk space is cheap these days, I always
manage to run out of it, and tend to remove stuff I do not think I will
still need (such as old, already uploaded/installed packages).

-- 
 EARTH
 smog  |   bricks
 AIR  --  mud  -- FIRE
soda water |   tequila
 WATER
 -- with thanks to fortune




Re: Ubuntu discussion at planet.debian.org

2004-10-25 Thread Michael K. Edwards
 Steve Langasek

 It is not correct.  At the time testing freezes for sarge, there are likely
 to be many packages in unstable which either have no version in testing, or
 have older versions in testing.  The list of such packages is always visible
 at http://ftp-master.debian.org/testing/update_excuses.html.gz.  While
 it's a goal of the release team to ensure that *incidental* issues don't
 keep package fixes/updates out of testing, there are plenty of package
 updates which will come too late for consideration, or will be RC-buggy in
 their own right, that won't be part of sarge.

That's the URL I was trying to remember; thanks.  That's what I meant
by "the interesting thing about testing is the dependency analysis".
I think the information in update_excuses mostly supports the
"convergence is readiness" hypothesis.

It seems to me that Jérôme's observation also takes into account the
fact that experimental exists, so that changes that maintainers know
would break britney don't get put into unstable late in the cycle. 
Without that, I wouldn't expect testing -> unstable convergence ever
to happen.  But don't you think that, until testing converges (nearly)
to unstable, it's hard to know how much of testing will FTBFS on
testing itself?

Although it does sometimes happen that an update breaks something that
works in the version in testing, I think it's more common for an RC
bug to apply to earlier versions as well, even when it's an FTBFS for
something that used to build.  (That often seems to mean that one of
the build-deps evolved out from under the package or got removed
because it was old or broken, and the source that's made it into
testing won't build there either.)  So I would expect that the vast
majority of RC bugs filed against packages in sid have to be handled
by really fixing them -- and letting the fix propagate into testing --
or excluding the package from sarge.
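
One cheap way to test that last claim for any given package -- the tool
choice is mine, a clean testing chroot via pbuilder being just one
option -- is to rebuild the exact source that is in testing, inside
testing itself:

    # fetch the source version currently in testing
    apt-get source foo/testing
    # build it in a pristine testing chroot
    sudo pbuilder create --distribution testing
    sudo pbuilder build foo_*.dsc

If that fails, the FTBFS predates the unstable upload.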

Freezing base+standard at this stage saves the package maintainers the
trouble of uploading to experimental instead of unstable for a while,
and makes it a lot easier for the RMs to allow fixes in selectively. 
Otherwise, progressive freezes don't really alter this analysis.

 And immediately *after* the freeze point, I think we can safely expect
 unstable to begin diverging even further from testing.

True enough.  In a lot of commercial software development, the
interval between code freeze / VC branch and release is necessary so
that QA can finally do a full run through the test plan and the senior
coders are free to fix any RC bug they can.  Everybody else works on
the trunk.  So apply the "testing (almost) = unstable" criterion to
the freeze point rather than the release point, with the understanding
that the packages for which it's not true are exactly the ones that
need more / different attention during the freeze than they were
getting before.

 While getting KDE updates into testing has been a significant task in the
 past, I'm not aware of any point during the sarge release cycle when KDE has
 been a factor delaying the release.

Er, does the current situation fit?  An awful lot of update_excuses
seems to boil down to Bug#266478, and it's hard to see the RC bug
count on KDE 3.2 apps dropping by much until the debate about letting
KDE 3.3 in is resolved.  I think the C++ toolchain issues I mentioned
were a factor in KDE 3.2 propagation into testing being delayed to the
point that KDE 3.3 is even worth discussing.  But I haven't been
following those issues at all lately, so don't take my opinion on this
too seriously; maybe I should just ignore that portion of
update_excuses.

Cheers,
- Michael




Re: Ubuntu discussion at planet.debian.org

2004-10-25 Thread Jonas Meurer
On 24/10/2004 Mike Hommey wrote:
 If people test unstable, then it's unstable we should release, not
 testing. As somebody said in this thread not enough people are trying
 testing, and that's one of our problems in the release cycle.

just to say that, i know of many debian users (me included) who run
unstable on their development machines, but testing on unofficial,
non-critical or otherwise non-high-availability servers.

and the problem with freezing unstable is that after the release,
unstable becomes rather unusable, as not all developers will stop
developing new and significant changes in their software.

bye 
 jonas




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Manoj Srivastava
On Sun, 24 Oct 2004 09:46:32 +1000, Matthew Palmer [EMAIL PROTECTED] said: 

 On Sat, Oct 23, 2004 at 02:40:24PM -0500, Manoj Srivastava wrote:
 On Sat, 23 Oct 2004 11:54:05 +0200, Jérôme Marant
 [EMAIL PROTECTED] said:
  Manoj Srivastava [EMAIL PROTECTED] writes:
  Okay, that's what t-p-u is roughly for, but the fact is that
  it's quite painful.
  
  Could you elaborate on that? Why is it so painful?
 
  Probably because you need to maintain packages for both unstable and
  testing at the same time.
 
 That is a simple branching issue in the version control system, no?

 A huge rush of air fills the list as hundreds of developers fill
 their lungs to collectively say "I don't use version control"...

Really? Good good, I would expect developers to adhere to this
 most basic of recommended software practices.
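
The branching alluded to above is cheap in any of the usual tools; with
Subversion, for instance, splitting off a sarge line for a package
might look like this (repository paths hypothetical):

    # branch the packaging at the point where testing froze
    svn copy svn://example.org/pkg/foo/trunk \
        svn://example.org/pkg/foo/branches/sarge \
        -m "branch foo for sarge/t-p-u fixes"

    # fix the bug on the branch, then build the t-p-u upload from it
    svn checkout svn://example.org/pkg/foo/branches/sarge foo-sarge
    cd foo-sarge && dpkg-buildpackage -rfakeroot

Work for unstable continues on trunk, untouched.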

manoj
-- 
Obviously I was either onto something, or on something. Larry Wall on
the creation of Perl
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Manoj Srivastava
On Sun, 24 Oct 2004 11:27:41 +0900, Mike Hommey [EMAIL PROTECTED] said: 

 On Sat, Oct 23, 2004 at 12:14:45PM -0500, Manoj Srivastava wrote:
 On Sat, 23 Oct 2004 14:23:48 +0900, Mike Hommey [EMAIL PROTECTED]
 said:
 
  And why not, instead of freezing unstable, make it build against
  testing, when we try to freeze testing?
 
 Libraries. If you build against a library version that is no longer
 in unstable, then you may have issues in testing when a new library
 tries to migrate into testing -- cause nowhere would there be
 packages built against the new library version.

 I don't see the point. If you build against what is in testing,
 there's no issue when migrating to testing.  One particular issue

And you wouldn't ever be able to run unstable, so what's the
 point of having it if people don't test unstable?

 would be when libraries change ABI, and new packages would need to
 be built against them, but still, at that particular time, the
 purpose being mainly to freeze testing, these ABI changes should be
 candidates for experimental.

In other words, stop all development dead, since experimental
 is never ever used as a default distribution by anyone sane.


 Not to mention that unstable would become unviable as a
 distribution -- the run time libs may not be the ones that are
 needed by the packages in unstable.

 At that particular time, isn't frozen-testing the one that is
 supposed to be a distribution ?

If unstable is not a distribution, what the hell is the point
 of having all the paraphernalia of unstable around?  The whole point
 of uploading to unstable is to have people test packages in
 unstable. 

 On top of the problems mentioned in the other replies, there's the fact
 that autobuilders have to be set up for t-p-u... can you remind me
 how long sarge has been planned for freeze? And for how long have
 autobuilders been required for alpha and mips for t-p-u?

This is incorrect, t-p-u is indeed supported by buildds --
 though this paragraph seems to be more like a rant than anything
 else.

manoj
-- 
Psychiatry is the care of the id by the odd.
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Mike Hommey
On Sun, Oct 24, 2004 at 12:11:51AM -0500, Manoj Srivastava wrote:
[...]
   If unstable is not a distribution, what the hell is the point
  of having all the paraphernalia of unstable around?  The whole point
  of uploading to unstable is to have people test packages in
  unstable. 

If people test unstable, then it's unstable we should release, not
testing. As somebody said in this thread, not enough people are trying
testing, and that's one of our problems in the release cycle.

[...backwards a bit...]
   In other words, stop all development dead, since experimental
  is never ever used as a default distribution by anyone sane.

Stop all development? See the situation for GNOME 2.8. It is in
experimental. It is compiled for several architectures, and is maybe
soon ready to be put in unstable. Do you really call that stopping all
development?

   This is incorrect, t-p-u is indeed supported by buildds --
  though this paragraph seems to be more like a rant than anything
  else.

Okay, it's a month old, but there hasn't been any since.
http://lists.debian.org/debian-devel-announce/2004/09/msg5.html
We are also still missing official autobuilders for
testing-proposed-updates on alpha and mips.  All other architectures
appear to be keeping up with t-p-u uploads.

And vorlon told me not so long ago that it was still the case, and that
it was the reason why the NMU by Frank Lichtenheld for kxmleditor[1]
through t-p-u still hasn't made it to sarge... and you may know that all
KDE applications updates have to go through t-p-u, since unstable is
polluted with KDE 3.3 which won't make it for sarge.

Take it as a rant if you want, but I'm just noticing.

Mike

1. http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=265680&msg=35




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Martin Schulze
Mike Hommey wrote:
 On Sat, Oct 23, 2004 at 12:14:45PM -0500, Manoj Srivastava wrote:
  On Sat, 23 Oct 2004 14:23:48 +0900, Mike Hommey [EMAIL PROTECTED] said: 
  
   And why not, instead of freezing unstable, make it build against
   testing, when we try to freeze testing?
  
  Libraries. If you build against a library version that is no
   longer in unstable, then you may have issues in testing when a new
   library tries to migrate into testing -- cause nowhere would there be
   packages built against the new library version.
 
 I don't see the point. If you build against what is in testing, there's
 no issue when migrating to testing.

Maybe you forgot that in such cases testing becomes what unstable is:
unstable.  You're likely to be unable to install software because
dependencies cannot be fulfilled yet by the architecture you're
running.  What exactly have you won?

 One particular issue would be when libraries change ABI, and new
 packages would need to be built against them, but still, at that
 particular time, the purpose being mainly to freeze testing, these 
 ABI changes should be candidates for experimental.

Err...  experimental ABI changes are for experimental.  Confirmed ABI
and API changes are for unstable (or whatever you want to call the
development branch).  We must not hide those changes from the future
stable distribution since they were made and confirmed upstream.

Regards,

Joey

-- 
MIME - broken solution for a broken design.  -- Ralf Baechle

Please always Cc to me when replying to me on the lists.




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Martin Schulze
Henrique de Moraes Holschuh wrote:
 Is there really a developer out there that doesn't do even the most
 rudimentary VC by keeping copies of all the source packages he has
 uploaded/worked on ?

FWIW: I've heard so...

Regards,

Joey

-- 
MIME - broken solution for a broken design.  -- Ralf Baechle

Please always Cc to me when replying to me on the lists.




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Mike Hommey
On Sun, Oct 24, 2004 at 07:53:27AM +0200, Martin Schulze wrote:
 Err...  experimental ABI changes are for experimental.  Confirmed ABI
 and API changes are for unstable (or whatever you want to call the
 development branch).  We must not hide those changes from the future
 stable distribution since they were made and confirmed upstream.

Are you saying you would go with a (for instance) gcc ABI change right
now (i.e. while trying to release) in unstable if there was one ?

Mike




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Manoj Srivastava
On Sun, 24 Oct 2004 14:52:17 +0900, Mike Hommey [EMAIL PROTECTED] said: 

 On Sun, Oct 24, 2004 at 12:11:51AM -0500, Manoj Srivastava wrote:
 [...]
 If unstable is not a distribution, what the hell is the point of
 having all the paraphernalia of unstable around?  The whole point
 of uploading to unstable is to have people test packages in
 unstable.

 If people test unstable, then it's unstable we should release, not
 testing. As somebody said in this thread, not enough people are
 trying testing, and that's one of our problems in the release cycle.

The world is not binary. As it is, we have people testing both
 unstable and Sarge, giving us two levels at which bugs may be caught
 and fixed. And the only numbers I have seen quoted about usage seem
 to indicate that testing is indeed being run by a whole slew of
 people.

 [...backwards a bit...]
 In other words, stop all development dead, since experimental is
  never ever used as a default distribution by anyone sane.

  Stop all development? See the situation for GNOME 2.8. It is in
 experimental. It is compiled for several architectures, and is maybe
 soon ready to be put in unstable. Do you really call that stopping
 all development ?

Anecdotal evidence is not the singular for data. I am speaking
 about past experience, where yes, by and large, development was
 indeed stopped.  Obviously, there are exceptions to any rule.

 This is incorrect, t-p-u is indeed supported by buildds -- though
 this paragraph seems to be more like a rant than anything else.

 Okay, it's a month old, but there hasn't been any since.
 http://lists.debian.org/debian-devel-announce/2004/09/msg5.html
 We are also still missing official autobuilders for
 testing-proposed-updates on alpha and mips.  All other architectures
 appear to be keeping up with t-p-u uploads.

 Missing a buildd on an arch or two is a far cry from t-p-u
 being unsupported.

 Take it as a rant if you want, but I'm just noticing.

Frankly, I am not seeing this as a big pain in the butt. It is
 a deficiency in support for some of the supported architectures, yes.

manoj
-- 
Do you think your mother and I should have lived comfortably so long
together if ever we had been married?
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Hamish Moffatt
On Sat, Oct 23, 2004 at 11:08:17PM -0500, Manoj Srivastava wrote:
 On Sun, 24 Oct 2004 09:46:32 +1000, Matthew Palmer [EMAIL PROTECTED] said: 
  A huge rush of air fills the list as hundreds of developers fill
  their lungs to collectively say "I don't use version control"...
 
   Really? Good good, I would expect developers to adhere to this
  most basic of recommended software practices.

*Shrug* I haven't seen much need here. It's usually possible to track
down earlier package versions if I really need to, from Debian, or
snapshot.debian.net, or out-of-date mirrors (:).

Hamish
-- 
Hamish Moffatt VK3SB [EMAIL PROTECTED] [EMAIL PROTECTED]




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Matthias Urlichs
Hi, Henrique de Moraes Holschuh wrote:

 Is there really a developer out there that doesn't do even the most
 rudimentary VC by keeping copies of all the source packages he has
 uploaded/worked on ?

What for? You can always get your old versions from snapshot.debian.net.

SCNR,
-- 
Matthias Urlichs   |   {M:U} IT Design @ m-u-it.de   |  [EMAIL PROTECTED]




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Jérôme Marant
Matthew Palmer [EMAIL PROTECTED] writes:

  That is a simple branching issue in the version control
  system, no?

 A huge rush of air fills the list as hundreds of developers fill their
 lungs to collectively say "I don't use version control"...

AFAIK, it has nothing to do with VC.

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Michael K. Edwards
On Sat, 23 Oct 2004 01:04:41 +0200, Jérôme Marant [EMAIL PROTECTED] wrote:
 As soon as testing is strictly equal to unstable regarding package
 versions, testing is roughly ready for release.

I think this observation is acute -- as applied to the _current_
testing mechanism.

Personally, I view testing as a QA system for the release process,
not a sensible distribution for anyone (developer or end user) to be
running on a real system.  My understanding of the mechanism by
which packages propagate into testing is that there's only one
interesting thing about it: the _reason_ why any given package fails
to propagate.  The automated dependency analysis takes some of the
personality conflicts out of the assessment of the release status, and
provides macroscopic milestones (systematic transition to library X,
version upgrade to desktop suite Y) during a certain phase of the
release cycle.

I am in the interesting position of serving as buildmaster for an
appliance incorporating a snapshot of a subset of Debian unstable. 
(I may perhaps deserve some flamage for not keeping up communication
with the Debian project while working on this, more or less in
isolation.  Allow me to plead that the pressure of circumstances has
been rather intense, and I am hoping that recent management changes
will result in more follow-through on promises of return contributions
of time and other resources.)

Perhaps I've just been lucky, but I haven't had any technical trouble
at all due to the choice of unstable.  The only issue I encountered
was an upstream-induced regression in MySQL 4.0.21, which would have
hit me anyway (we bought MySQL's commercial support contract, but I
have no desire to ship any bits that haven't been through the hands of
the Debian packager, who seems to be on top of the situation). 
snapshot.debian.net was a real lifesaver on this one, allowing me to
choose the particular historical version I wanted for the affected
packages.
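
(For anyone wanting to reproduce that trick: the service's successor,
snapshot.debian.org, exposes dated archives that can go straight into
sources.list -- the date and version below are made up, so check what
the service actually carries:

    deb http://snapshot.debian.org/archive/debian/20041024T000000Z/ sid main

    apt-get update
    apt-get install mysql-server=4.0.20-1

The explicit "package=version" form is what pins you to one historical
version rather than whatever is newest in the chosen snapshot.)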

When sarge releases, I'm going to want to be able to sync the boxes in
the field up to sarge so that they can participate in the stable
security maintenance process.  In the best of all possible worlds, I'd
have some guarantee that sarge will contain package versions no lower
than today's unstable, at least for the packages I'm bundling.  But I
don't think it's at all reasonable to expect that kind of a guarantee,
and I'm just going to have to do my own QA on the upgrade/downgrade
process from the snapshot I've chosen to the eventual golden sarge.

If Jérôme's observation is correct, then I don't need to worry;
unstable will converge to a consistent state under the watchful eyes
of the RM (and many others), testing will rise to meet it, and the
worst that might happen is that some of the packages I've chosen could
be excluded from sarge because of a quality problem or an ill-timed
maintainer absence.  This would be an inconvenience but hardly grounds
for complaint about Debian's release process.

In this light (and for my purposes), the only sensible place to branch
stable off from unstable is at a point where the major subsystems are
all going to be reasonably maintainable on the branch.  Perhaps we're
close to such a point now and just haven't been for a while, for
reasons largely beyond the Debian project's control.  (Apart from the
definition of its major subsystems, that is; note that Ubuntu
doesn't expect to be able to provide the level of support for KDE that
they plan for Gnome, and it appears to me that the effect of changes
in the C++ toolchain on KDE has been a significant factor in delaying
sarge.  Do tell me if I'm mistaken about that, but please don't flame
too hard; I'm not casting aspersions on KDE or g++/libstdc++, just
recording an impression.)

To me, the miracle is that a stable distribution is possible at all,
given human nature and the scope of the Debian archive.  The old adage
about sausage and the law goes double for software, perhaps because
it's a sort of sausage (a composite product of many, er, committed
contributors) stuffed with law (part logic and part historical
circumstance).  I have to admit that it takes a strong stomach to
watch the sausage being made and then eat it anyway, but it helps to
focus on how much better it is than one's other options in sausages. 
That's still how I feel about Debian, with good reason.

Cheers,
- Michael




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Martin Schulze
Mike Hommey wrote:
 On Sun, Oct 24, 2004 at 07:53:27AM +0200, Martin Schulze wrote:
  Err...  experimental ABI changes are for experimental.  Confirmed ABI
  and API changes are for unstable (or whatever you want to call the
  development branch).  We must not hide those changes from the future
  stable distribution since they were made and confirmed upstream.
 
 Are you saying you would go with a (for instance) gcc ABI change right
 now (i.e. while trying to release) in unstable if there was one ?

As somebody said, the world is not binary.  This also has to be decided
on a per-use case.  Hence, I cannot provide a yes/no answer to this general
question.

Regards,

Joey

-- 
MIME - broken solution for a broken design.  -- Ralf Baechle

Please always Cc to me when replying to me on the lists.




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Anthony Towns
On Sat, Oct 23, 2004 at 06:48:31AM +0200, Jérôme Marant wrote:
 There are packages that never enter testing and nobody notices because
 everyone uses unstable (sometimes because of buggy dependencies).

This isn't true: http://www.debian-administration.org/?poll=3

Sure, it's a tiny enough sample that the 32% probably isn't terribly
reliable; but heck, even one person is enough to refute a claim of
"everyone", and there are 59 people noted there.

Can we at least avoid being quite so negligent with the truth? Like, say,
not making claims about how many people use testing in the first place,
without getting some actual numbers to back them up?

Cheers,
aj

-- 
Anthony Towns [EMAIL PROTECTED] http://azure.humbug.org.au/~aj/
Don't assume I speak for anyone but myself. GPG signed mail preferred.

``[S]exual orgies eliminate social tensions and ought to be encouraged.''
  -- US Supreme Court Justice Antonin Scalia (http://tinyurl.com/3kwod)




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Steve Langasek
On Sun, Oct 24, 2004 at 01:37:21AM -0500, Manoj Srivastava wrote:
  This is incorrect, t-p-u is indeed supported by buildds -- though
  this paragraph seems to be more like a rant than anything else.

  Okay, it's a month old, but there hasn't been any since.
  http://lists.debian.org/debian-devel-announce/2004/09/msg5.html
  We are also still missing official autobuilders for
  testing-proposed-updates on alpha and mips.  All other architectures
  appear to be keeping up with t-p-u uploads.

   Missing a buildd on an arch or two is a far cry from t-p-u
  being unsupported.

Unfortunately, nothing in t-p-u can be safely accepted into testing until
it's been built for all relevant architectures.  While having 9 out of 11
architectures building t-p-u is better than still needing all 11 archs to be
set up for it, the practical, visible impact *today* on testing is the same;
it just means that the tomorrow when we can use t-p-u for its intended
purpose is likely a little closer.

-- 
Steve Langasek
postmodern programmer




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Steve Langasek
On Sun, Oct 24, 2004 at 03:48:04AM -0700, Michael K. Edwards wrote:
 On Sat, 23 Oct 2004 01:04:41 +0200, Jérôme Marant [EMAIL PROTECTED] wrote:
  As soon as testing is strictly equal to unstable regarding package
  versions, testing is roughly ready for release.

 If Jérôme's observation is correct, then I don't need to worry;
 unstable will converge to a consistent state under the watchful eyes
 of the RM (and many others), testing will rise to meet it, and the
 worst that might happen is that some of the packages I've chosen could
 be excluded from sarge because of a quality problem or an ill-timed
 maintainer absence.

It is not correct.  At the time testing freezes for sarge, there are likely
to be many packages in unstable which either have no version in testing, or
have older versions in testing.  The list of such packages is always visible
at http://ftp-master.debian.org/testing/update_excuses.html.gz.  While
it's a goal of the release team to ensure that *incidental* issues don't
keep package fixes/updates out of testing, there are plenty of package
updates which will come too late for consideration, or will be RC-buggy in
their own right, that won't be part of sarge.

And immediately *after* the freeze point, I think we can safely expect
unstable to begin diverging even further from testing.

 In this light (and for my purposes), the only sensible place to branch
 stable off from unstable is at a point where the major subsystems are
 all going to be reasonably maintainable on the branch.  Perhaps we're
 close to such a point now and just haven't been for a while, for
 reasons largely beyond the Debian project's control.  (Apart from the
 definition of its major subsystems, that is; note that Ubuntu
 doesn't expect to be able to provide the level of support for KDE that
 they plan for Gnome, and it appears to me that the effect of changes
 in the C++ toolchain on KDE has been a significant factor in delaying
 sarge.  Do tell me if I'm mistaken about that, but please don't flame
 too hard; I'm not casting aspersions on KDE or g++/libstdc++, just
 recording an impression.)

While getting KDE updates into testing has been a significant task in the
past, I'm not aware of any point during the sarge release cycle when KDE has
been a factor delaying the release.

-- 
Steve Langasek
postmodern programmer




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Marcelo E. Magallon
On Sun, Oct 24, 2004 at 01:37:21AM -0500, Manoj Srivastava wrote:

   Okay, it's a month old, but there hasn't been any since.
   http://lists.debian.org/debian-devel-announce/2004/09/msg5.html
   We are also still missing official autobuilders for
   testing-proposed-updates on alpha and mips.  All other architectures
   appear to be keeping up with t-p-u uploads.
  
   Missing a buildd on an arch or two is a far cry from t-p-u
   being unsupported.

 Testing is by design all-or-nothing.  As long as a single architecture
 lacks buildd support for t-p-u, the buildd support for t-p-u is as
 good as missing.  You could do builds by hand, but then again, how many
 developers actually do that?  And it only takes a mail to the admin
 team ("please install build dependencies for foo in bar").

 Marcelo




Re: Ubuntu discussion at planet.debian.org

2004-10-24 Thread Andrew Pollock
On Fri, Oct 22, 2004 at 12:25:48PM +0200, Jérôme Marant wrote:
 Antti-Juhani Kaijanaho [EMAIL PROTECTED] writes:
 
  On 20041022T134825+0200, Jérôme Marant wrote:
  Before testing, the RM used to freeze unstable and people were
  working on fixing bugs. There were pretest cycles with bug horizons,
  and freezes were shorter.
 
  That's not true (unless you are talking about something that ceased
  several years before testing became live, certainly before I started
  following Debian development in 1998).  Before testing the RM used to
  fork unstable into a frozen distribution.  Unstable was still open for
  development, and heated arguments developed on this very list asking
  that the process be changed so that unstable would be frozen; this was
  never done.
 
  I don't know what you mean by pretest cycles with bug horizons.
 
 
 You are correct. It seems so old to me that I didn't even recall
 it was a fork. This indeed explains why that process had to
 be improved. It also explains why the current process needs to
 be improved as well.
 
 Thanks to Ubuntu, we now have a good example of what's proven
 to work.
 

I think it is premature to declare that Ubuntu's model works any better than
what we're currently doing, in the long run.

regards

Andrew

-- 
linux.conf.au 2005   -  http://lca2005.linux.org.au/  -  Birthplace of Tux
April 18th to 23rd   -  http://lca2005.linux.org.au/  -   LINUX
Canberra, Australia  -  http://lca2005.linux.org.au/  -Get bitten!




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Mike Hommey
On Fri, Oct 22, 2004 at 10:28:29PM -0500, Manoj Srivastava wrote:
   This is a fallacy.  In the past, when we did freeze unstable,
  it never forced me to do anything but twiddle my thumbs for months
  until things got moving again. The reason freezing unstable did
  not make me fix any more bugs is that the bugs were not in packages I
  was in any way an expert in.
 
   Freezes just used to be a frustrating, prolonged period in
  which I did no Debian work at all, waiting for unstable to thaw back
  out.

And why not, instead of freezing unstable, make it build against
testing, when we try to freeze testing? Okay, that's what t-p-u is
roughly for, but the fact is that it's quite painful.

Mike




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Manoj Srivastava
On Fri, 22 Oct 2004 19:57:15 +0200, Eduard Bloch [EMAIL PROTECTED] said: 

 #include <hallo.h>
 * Romain Francoise [Fri, Oct 22 2004, 06:04:12PM]:

  Is the entire world on crack and I just failed to notice until
  now?
 
 Don't worry, we're preparing an internal General Resolution to
 address this crack problem, but you're not supposed to know about
 it.  This is how we fix problems in Debian: hide them, then propose
 General Resolutions.

 And your point is..?

That a GR on technical issues is moronic?

 It is our right to hide things. We do not hide problems, we hide
 possible solutions. The problem is well known, but there are

This is even stupider than I thought possible.

 different ways to solve it. And before you think about writing
 another message, think about the reason for having the
 debian-private ML.

It certainly is not to have moronic conversations like
 this. We should certainly not be hiding stupidity in Debian ranks.

manoj
-- 
Lay on, MacDuff, and curs'd be him who first cries, Hold,
enough!. Shakespeare
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Manoj Srivastava
On Fri, 22 Oct 2004 11:36:13 +0200, Eduard Bloch [EMAIL PROTECTED] said: 

 #include <hallo.h>
 * Jérôme Marant [Fri, Oct 22 2004, 10:20:51AM]:

 Some improvements have already been proposed by Eduard Bloch and
 Adrian Bunk: freezing unstable while keeping testing.

 Jerome, please, you could have asked me. I prepare an internal GR
 draft for exactly this issue, but it is to be made public on the day
 of the release, and better not before. We should concentrate on
 making the Sarge release ready, NOW. Do not start another flamewar.

A ^%$#^ GR? to decide on technical issues like release
 management?  This is incredibly stupid.  We used to never decide
 technical issues by popular opinion -- and, anyway, a GR on release
 policy is a no-op. The proper way to go about changing the release
 mechanism has already been demonstrated by AJ -- he went off and
 implemented testing on his own, shadowing the real archive, wrote up
 the testing scripts, and came back with numbers, and proof of
 concept, not a meaningless vote.

You know, all this politicking, as opposed to writing code, is
 probably the prime factor behind any decline in the quality of the
 distribution. 

manoj
-- 
Q: What's the difference betweeen USL and the Graf Zeppelin? A: The
Graf Zeppelin represented cutting edge technology for its time.
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Jérôme Marant
Manoj Srivastava [EMAIL PROTECTED] writes:


 What do you think we'd get by combining both (testing + unstable
 freeze)?

   If you freeze unstable anyway, you are blocking the updates --
  and thus have all the problems of this style of interrupted
  development. If unstable is frozen, what is the point of Testing?

Testing scripts are a gatekeeper against mistakes from unstable.
Uploading debian-specific changes to unstable doesn't necessarily mean
there won't be side effects that should not enter testing.

   Am I missing something in your (somewhat nebulous) proposal?

Freezing unstable prevents people from uploading new upstream releases
which desynchronizes unstable from testing and forces people to
work with two distributions (and necessarily neglect one of them).

As soon as testing is strictly equal to unstable regarding package
versions, testing is roughly ready for release.

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Daniel J. Priem
On Fri, 22 Oct 2004 at 22:26, Eduard Bloch wrote:
 #include <hallo.h>
 * D. Starner [Fri, Oct 22 2004, 11:31:10AM]:
 Or do you really believe that mega-threads help much? Do you really
 think that Canonical/Ubuntu is more successful because they discuss
 more and let everyone publish their $0.02 that everybody needs to read? Do
 you really think that the explosion of redundant messages in mega-threads
 is productive?

No.

 
  That that's wrong. That GRs have been proposed way too much recently.
 
 Exactly. That is why I am not going to release a half-done paper. It is
 better to be discussed in a small circle. The GR drafts posted in the
 last months caused something I wish to avoid - fscking huge flamewars.

Full ACK.

 
   We do not hide problems, we hide
   possible solutions.
  
  And that's _so_ much better.
 
 If we get more important things done first - yes.

Yes. Concentrate on the work that should be done now.

Daniel
 
 And from now, I will refuse to answer to anything posted to this
 subthread.
 
 Regards,
 Eduard.
 -- 
 stockholm Overfiend: why dont you flame him? you are good at that.
 Overfiend I have too much else to do.
 




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Manoj Srivastava
On Sat, 23 Oct 2004 01:04:41 +0200, Jérôme Marant [EMAIL PROTECTED] said: 

 Manoj Srivastava [EMAIL PROTECTED] writes:
 What do you think we'd get by combining both (testing + unstable
 freeze)?
 
 If you freeze unstable anyway, you are blocking the updates -- and
 thus have all the problems of this style of interrupted
 development. If unstable is frozen, what is the point of Testing?

 Testing scripts are a gatekeeper against mistakes from unstable.
 Uploading debian-specific changes to unstable doesn't necessarily mean
 there won't be side effects that should not enter testing.

Why not just freeze testing, and create an
 ultra-pending-release frozen "candidate" branch which is a gatekeeper
 against mistakes from testing?

 Am I missing something in your (somewhat nebulous) proposal?

 Freezing unstable prevents people from uploading new upstream
 releases which desynchronizes unstable from testing and forces
 people to work with two distributions (and necessarily neglect one
 of them).

 How does this actually make testing become releasable sooner,
 if testing is actually frozen? Freeze testing, leave unstable alone,
 and create as many harder-frozen-ready-to-release candidate variants
 of testing as you want.

See, you don't really need people in power to do this: just
 create a fake-testing somewhere, and a fake-frozen, and see if things
 actually come together sooner that way.

 As soon as testing is strictly equal to unstable regarding package
 versions, testing is roughly ready for release.

This may take forever. However, frozen-testing and
 frozen-candidate may converge towards equivalence asymptotically.

manoj
-- 
You will have many recoverable tape errors.
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Jérôme Marant
Colin Watson [EMAIL PROTECTED] writes:

 On Fri, Oct 22, 2004 at 02:48:01PM +0200, Jérôme Marant wrote:
 Joey Hess [EMAIL PROTECTED] writes:
  When we used to freeze unstable before a release, one of the problems
  was that many updates were blocked by that, and once the freeze was
  over, unstable tended to become _very_ unstable, and took months to get
  back into shape.
 
 What do you think we'd get by combining both (testing + unstable freeze)?

 My guess is that the release team would go insane having to approve
 every upload to unstable.

I don't think so. Dinstall would reject any new upstream release.
Approvals would only apply to t-p-u, just as is done
currently.
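
Mechanically such a gate would be cheap; a sketch of what a
dinstall-side check could do, with hypothetical variable names around
dpkg's real version comparator:

    #!/bin/sh
    # strip the epoch and the Debian revision to get the upstream part
    upstream() { echo "$1" | sed -e 's/^[0-9]*://' -e 's/-[^-]*$//'; }

    old=$(upstream "$OLD_VERSION")
    new=$(upstream "$NEW_VERSION")
    if dpkg --compare-versions "$new" gt "$old"; then
        echo "REJECT: new upstream release during freeze" >&2
        exit 1
    fi

(A native package has no Debian revision, so any upload of it bumps
the "upstream" part and would always be rejected -- which is exactly
Manoj's objection elsewhere in the thread.)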

 Before you say it, it's much easier to do this sort of thing in Ubuntu
 because we have a small enough team that we don't have to lock down the
 archive during freezes, but instead just say "don't upload without
 approval". In Debian, we've seen many times (e.g. when trying to get
 large groups of interdependent packages into testing) that not all
 developers can be assumed to have read announcements or will agree with
 the procedure, and I think we could expect many unapproved uploads if we
 tried such an open procedure; so we'd have to lock down the archive
 using technical measures.

I agree with you. It is too bad we'd have to lock down the archive,
but you don't manage a set of 900 volunteers the same way you manage
30 paid developers, methinks.

Cheers,

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Jérôme Marant
Colin Watson [EMAIL PROTECTED] writes:

 Are you saying that technical choices do not contribute to the success
 of Canonical? For instance, deciding to target the distribution at
 most popular architectures only?

 In my experience as both a Canonical employee and a Debian developer,
 the number of architectures supported by Ubuntu makes a negligible
 difference to Ubuntu's ability to release.

Nonetheless, you won't deny it makes things significantly slower.

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Jérôme Marant
Manoj Srivastava [EMAIL PROTECTED] writes:

 On Fri, 22 Oct 2004 10:20:51 +0200, Jérôme Marant [EMAIL PROTECTED] said: 

 Debian developers, on the contrary, run unstable and rarely run
 testing, which means that they don't really know about the shape of
 what they release.

   The reason I run unstable is because that is where I upload
  to -- and that is where the shared libs are that my packages use, and
  that is where I work out the bugs experienced. However, testing does
  not seem to be too far off from unstable in the packages I use a
  lot. 

There are packages that never enter testing and nobody notices because
everyone uses unstable (sometimes because of buggy dependencies).

 The Testing distribution helped a lot in release
 management, especially for synchronizing architectures.  Some
 improvements have already been proposed by Eduard Bloch and Adrian
 Bunk: freezing unstable while keeping testing.  Freezing unstable
 forces people to work on fixing bugs, and the quicker the bugs are
 fixed, the quicker the distribution is released and the quicker

   This is a fallacy.  In the past, when we did freeze unstable,
  it never forced me to do anything but twiddle my thumbs for months
  until things got moving again. The reason freezing unstable did
  not make me fix any more bugs is that the bugs were not in packages I
  was in any way an expert in.

   Freezes just used to be a frustrating, prolonged period in
  which I did no Debian work at all, waiting for unstable to thaw back
  out.

Because you always took proper care of your packages. It wouldn't
be necessary if everyone fixed bugs in the packages they maintain.

cheers,

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Jérôme Marant
Manoj Srivastava [EMAIL PROTECTED] writes:

 I think it would be marginal. After all, the experimental
 distribution does exist for this purpose and nonetheless, people do
 not neglect unstable.

   I do not think you understand what the experimental
  distribution is, and how it is different from unstable, if you can
  say that. (not a full distribution, contains truly volatile packages,
  not supported by buildd's, for a start).

Yes I do. Experimental is not really a distribution. It is a repository
you cherry-pick packages from. And packages are usually built against
unstable packages.

 Before testing, the RM used to freeze unstable and people were
 working on fixing bugs. There were pretest cycles with bug horizons,

   Not true. People were mostly twiddling their thumbs. Only a
  small subset of people can actually help in fixing RC bugs.

Are you talking about skills?

 and freezes were shorter.  Of course, without testing,
 synchronizing arches was a pain, that's why I'd say let's combine
 both.

 Instead of always saying that a given idea won't work, let's try it
 and conclude afterwards.

   We have tried the whole freezing route. But feel free to try
  it out (like aj did Testing), and tell us how it would have worked.

The difference is that I don't want to throw Testing out. 

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Frank Küster
Jérôme Marant [EMAIL PROTECTED] wrote:

 Colin Watson [EMAIL PROTECTED] writes:

 On Fri, Oct 22, 2004 at 02:48:01PM +0200, Jérôme Marant wrote:
 Joey Hess [EMAIL PROTECTED] writes:
  When we used to freeze unstable before a release, one of the problems
  was that many updates were blocked by that, and once the freeze was
  over, unstable tended to become _very_ unstable, and took months to get
  back into shape.
 
 What do you think we'd get by combining both (testing + unstable freeze)?

 My guess is that the release team would go insane having to approve
 every upload to unstable.

 I don't think so. Dinstall would reject any new upstream release.
  Approvals would only apply to t-p-u, just as is done
  currently.

Oh, it would be easy for me to break the tetex packages (and cause lots
of FTBFS bugs) just by applying all the great ideas about improved
packaging that I have in mind. No upstream version needed for that.

Regards, Frank
-- 
Frank Küster
Inst. f. Biochemie der Univ. Zürich
Debian Developer




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Jérôme Marant
Frank Küster [EMAIL PROTECTED] writes:


 I don't think so. Dinstall would reject any new upstream release.
 Approvals would only apply to t-p-u, just as is done
 currently.

 Oh, it would be easy for me to break the tetex-packages (and cause lots
 of FTBFS bugs) just by applying all the great ideas about improved
 packaging that I have in mind. No upstream version needed for that.

Come on, this is ridiculous. Of course, you can always cheat if you
want to. If we can't expect developers to be responsible people
at all, then we can shut the Debian project down.

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Manoj Srivastava
On Sat, 23 Oct 2004 06:54:17 +0200, Jérôme Marant [EMAIL PROTECTED] said: 


 Before testing, the RM used to freeze unstable and people were
 working on fixing bugs. There were pretest cycles with bug
 horizons,
 
 Not true. People were mostly twiddling their thumbs. Only a small
 subset of people can actually help in fixing RC bugs.

 Are you talking about skills?

Yes.  Recently, I tried fixing a selinux issue with
 dhcp3-client (closing file handles before forking). I spent a half
 day on it (usually enough for me to clean up a couple of packages I
 maintain and am familiar with). At the end of that time, I was still
 floundering around in the class and directory structure of dhcp3 (I
 think it would take a couple of days to really come up to speed on a
 package like that). In the end, I just brought the issue to the
 attention of the maintainers, and left it at that.

Now, I have time to maintain my own packages (barely), but not
 enough to spend a few days on a one-off effort to fix a bug.  So I
 _can_ help improve Debian -- but only in small areas where I have
 already gained some expertise.

 and freezes were shorter.  Of course, without testing,
 synchronizing arches was a pain, that's why I'd say let's combine
 both.
 
  Instead of always saying that a given idea won't work, let's try
  it and conclude afterwards.
 
 We have tried the whole freezing route. But feel free to try it out
 (like aj did Testing), and tell us how it would have worked.

 The difference is that I don't want to throw Testing out.

Quite. But you have not mentioned how you are going to
 ameliorate the effect of closing down all development for a few
 months by shutting down unstable.

manoj
-- 
A penny saved has not been spent.
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Manoj Srivastava
On Sat, 23 Oct 2004 14:23:48 +0900, Mike Hommey [EMAIL PROTECTED] said: 

 And why not, instead of freezing unstable, make it build against
  testing, when we try to freeze testing?

Libraries. If you build against a library version that is no
 longer in unstable, then you may have issues in testing when a new
 library tries to migrate into testing -- cause nowhere would there be
 packages built against the new library version.

Not to mention that unstable would become unviable as a
 distribution -- the run time libs may not be the ones that are needed
 by the packages in unstable.
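
Concretely (package names hypothetical): if testing ships libbar2 and
unstable has moved on to libbar3, a foo built against testing comes out
with

    Depends: libbar2 (>= 2.0-1)

so nothing anywhere has been built against libbar3, and when libbar3
tries to migrate into testing there is no rebuilt foo to migrate
along with it.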


 Okay, that's what t-p-u is roughly for, but the fact is that it's
 quite painful.

Could you elaborate on that? Why is it so painful?

manoj
-- 
Keep cool, but don't freeze. Hellman's Mayonnaise
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Manoj Srivastava
On Sat, 23 Oct 2004 06:36:26 +0200, Jérôme Marant [EMAIL PROTECTED] said: 

 Colin Watson [EMAIL PROTECTED] writes:
 On Fri, Oct 22, 2004 at 02:48:01PM +0200, Jérôme Marant wrote:
 Joey Hess [EMAIL PROTECTED] writes:
  When we used to freeze unstable before a release, one of the
  problems was that many updates were blocked by that, and once
  the freeze was over, unstable tended to become _very_ unstable,
  and took months to get back into shape.
 
 What do you think we'd get by combining both (testing + unstable
 freeze)?
 
 My guess is that the release team would go insane having to approve
 every upload to unstable.

 I don't think so. Dinstall would reject any new upstream release.
 Approvals would only apply to t-p-u just like it is done currently.

Umm. So no new debian native packages? Even though those are
 the ones we can best control? Also, this is a half-hearted
 solution. There is often a poor correlation between bugs and new
 upstream releases (in other words, I have screwed up packages in the
 past with my debian revision uploads far worse than any new upstream
 version). 

I still think you should look into testing-frozen and
 candidate distributions, locking down testing-frozen, and working
 towards improving candidate -- and that way, it is less intrusive,
 we'll  not have to scrap the current mechanism, and we can compare
 both methods all at the same time.

But that involves getting down, rolling up your sleeves, and
 doing _work_ -- rather than convincing other people to do it your
 way. The former is more likely to succeed.

manoj
-- 
Do students of Zen Buddhism do Om-work?
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Jérôme Marant
Manoj Srivastava [EMAIL PROTECTED] writes:

 I don't think so. Dinstall would reject any new upstream release.
 Approvals would only apply to t-p-u just like it is done currently.

 Umm. So no new debian native packages? Even though those are

Debian native packages are in some ways a special case.

  the ones we can best control? Also, this is a half-hearted
  solution. There is often a poor correlation between bugs and new
  upstream releases (in other words, I have screwed up packages in the
  past with my debian revision uploads far worse than any new upstream
  version). 

At least, stabilizing upstream releases would be an improvement; it
is called a feature freeze.
Of course, you can always find a way to screw up a new debian revision.

 I still think you should look into testing-frozen and
  candidate distributions, locking down testing-frozen, and working
  towards improving candidate -- and that way, it is less intrusive,
  we'll  not have to scrap the current mechanism, and we can compare
  both methods all at the same time.

IIRC, Raphaël Hertzog already made such a proposal in his DPL platform
two years ago. Are you referring to this? I recall he was utterly
pissed off by the RM at the time.

 But that involves getting down, rolling up your sleeves, and
  doing _work_ -- rather than convincing other people to do it your
  way. The former is more likely to succeed.

Ack.

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Jérôme Marant
Manoj Srivastava [EMAIL PROTECTED] writes:

 Okay, that's what t-p-u is roughly for, but the fact is that it's
 quite painful.

   Could you elaborate on that? Why is it so painful?

Probably because you need to maintain packages for both unstable and
testing at the same time. 
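
In practice the two targets mean two parallel uploads whose only
structural difference is the distribution field in debian/changelog; a
sketch, with every name and version made up:

    foo (1.2-3sarge1) testing-proposed-updates; urgency=high

      * Backport only the fix for the RC bug to the version in sarge.

     -- A. Maintainer <am@example.org>  Sat, 23 Oct 2004 12:00:00 +0200

    foo (1.3-1) unstable; urgency=low

      * New upstream release, which also contains the fix.

     -- A. Maintainer <am@example.org>  Sat, 23 Oct 2004 12:05:00 +0200

Keeping both lines buildable and in sync is the pain in question.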

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Jérôme Marant
Manoj Srivastava [EMAIL PROTECTED] writes:

 Not true. People were mostly twiddling their thumbs. Only a small
 subset of people can actually help in fixing RC bugs.

 Are you talking about skills?

   Yes.  Recently, I tried fixing a selinux issue with
...
   Now, I have time to maintain my own packages (barely), but not
  enough to spend a few days on a one-off effort to fix a bug.  So I
  _can_ help improve Debian -- but only in small areas where I have
  already gained some expertise.

I agree that I would not be able to fix glibc or gcc. But don't you think
this is marginal considering the number of packages in Debian?


 We have tried the whole freezing route. But feel free to try it out
 (like aj did Testing), and tell us how it would have worked.

 The difference is that I don't want to throw Testing out.

   Quite. But you have not mentioned how you are going to
  ameliorate the effect of closing down all development for a few
  months by shutting down unstable.

I never promised Voodoo magic that would fix everything, either.
It wouldn't be necessary if everyone took proper care of their
own packages.

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Jérôme Marant
Manoj Srivastava [EMAIL PROTECTED] writes:

 Testing scripts are a gatekeeper against mistakes from unstable.
 Uploading debian-specific changes to unstable doesn't necessarily mean
 there won't be side effects that should not enter testing.

   Why not just leave unstable alone, and create an
  ultra-pending-release frozen candidate branch which is a gatekeeper
  against mistakes from testing?  Freeze testing instead.

I thought freezing testing was planned. That's the incremental
freeze which is confusing.
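
As a rough illustration of the gatekeeper being discussed, a minimal
Python sketch follows; it is a hypothetical simplification of the real
testing scripts (britney), which also check things like dependency
installability on every architecture, and the architecture set and age
thresholds here are merely illustrative:

RELEASE_ARCHES = {"i386", "powerpc", "sparc", "alpha", "mips"}  # illustrative

def may_migrate(days_in_unstable, new_rc_bugs, built_on, urgency="low"):
    # A package is a candidate for testing only if it has aged in
    # unstable, introduces no new RC bugs, and is built everywhere.
    min_age = {"low": 10, "medium": 5, "high": 2}[urgency]
    if days_in_unstable < min_age:
        return False
    if new_rc_bugs > 0:
        return False
    if not RELEASE_ARCHES <= built_on:
        return False
    return True

print(may_migrate(12, 0, RELEASE_ARCHES))  # True: may move to testing
print(may_migrate(12, 1, RELEASE_ARCHES))  # False: a new RC bug blocks it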

 Am I missing something in your (somewhat nebulous) proposal?

 Freezing unstable prevents people from uploading new upstream
 releases, which would desynchronize unstable from testing and force
 people to work with two distributions (and necessarily neglect one
 of them).

   How does this actually make testing become releasable sooner,
  if testing is actually frozen?  Freeze testing, leave unstable alone,
  and create as many harder-frozen, ready-to-release candidate variants
  of testing as you want.

Again, I thought it was planned by RMs.

   See, you don't really need people in power to do this: just
  create a fake-testing somewhere, and a fake-frozen, and see if things
  actually come together sooner that way.

I fail to see how I can prove anything that way.

 As soon as testing is strictly equal to unstable regarding package
 versions, testing is roughly ready for release.

   This may take forever. However, frozen-testing and
  frozen-candidate may converge towards equivalence asymptotically.

It depends on the criteria of equality. You don't necessarily
want to be that strict.
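
For illustration, one strict criterion would be "no source package
differs in version between the two suites"; a tiny Python sketch with
invented archive data:

testing  = {"bash": "3.0-14", "glibc": "2.3.2-13", "emacs21": "21.3-5"}
unstable = {"bash": "3.0-14", "glibc": "2.3.2-16", "emacs21": "21.3-5"}

# Packages still out of sync; the strict rule demands this set be empty.
out_of_sync = {pkg for pkg in testing if unstable.get(pkg) != testing[pkg]}
print(out_of_sync)   # {'glibc'}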

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Steve Langasek
On Sat, Oct 23, 2004 at 06:39:20AM +0200, Jérôme Marant wrote:
 Colin Watson [EMAIL PROTECTED] writes:

  Are you saying that technical choices do not contribute to the success
  of Canonical? For instance, deciding to target the distribution at
  most popular architectures only?

  In my experience as both a Canonical employee and a Debian developer,
  the number of architectures supported by Ubuntu makes a negligible
  difference to Ubuntu's ability to release.

 Nonetheless, you won't deny it makes things significantly slower.

By saying that it makes a negligible difference, he *did* deny that it makes
things significantly slower.

-- 
Steve Langasek
postmodern programmer




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Jérôme Marant
Matthew Garrett [EMAIL PROTECTED] writes:

 Jérôme Marant [EMAIL PROTECTED] wrote:

 Are you saying that technical choices do not contribute to the success
 of Canonical? For instance, deciding to target the distribution at
 most popular architectures only?

 Supporting a reduced range of both targets and software makes life
 slightly easier, yes. But I've no especially good reason to believe that
 they'd be less successful if they had a slightly larger staff and
 supported all our architectures.
 
Setting up a build daemon seems to be easy. Finding skilled people
with some old architecture is not that easy. Supporting old architectures
also means helping developers with arch-specific bugs.

 It's not the technical issues with supporting multiple architectures
 that give us problems - it's the social issues surrounding access to
 buildds, incorporation into architectures, people failing to fix
 architecture specific bugs, people demanding that people fix
 architecture specific bugs, that sort of thing. It's undoubtedly true
 that we could release slightly faster with fewer architectures, but it's
 also true that we'd find something else to argue about in order to
 remove any advantage. 

As long as someone is interested in porting to a given architecture,
there is no reason not to support it. The question is whether developers
have to carry the burden. In other words, an architecture doesn't
necessarily have to be a release candidate.

 I'd be interested in hearing your point of view on the technical
 flaws as well.

 In Debian? I think what technical flaws there are are masked by other
 problems. We're actually spectacularly good at dealing with technical
 issues in comparison to most distributions.

Agreed.

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Jérôme Marant
Steve Langasek [EMAIL PROTECTED] writes:


 Nonetheless, you won't deny it makes things significantly slower.

 By saying that it makes a negligible difference, he *did* deny that it makes
 things significantly slower.

I forgot to add "in Debian". No need to be harsh.

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Steve Langasek
On Sat, Oct 23, 2004 at 12:35:11PM +0200, Jérôme Marant wrote:
 Steve Langasek [EMAIL PROTECTED] writes:

 Nonetheless, you won't deny it makes things significantly slower.

 By saying that it makes a negligible difference, he *did* deny that it makes
 things significantly slower.

 I forgot to add "in Debian". No need to be harsh.

I'm not sure why you think it's harsh of me to refute a bald,
unsubstantiated assertion about what someone else believes -- which is what
your comment is, with or without the "in Debian".  If Colin (who is in a
better position to judge this than I am) believes that the architecture
count in Ubuntu did not contribute significantly to the speed of their
release cycle, then he's clearly making a case that there's merely
*correlation* between the architecture count and the time to release, not
*causality*.

-- 
Steve Langasek
postmodern programmer




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Marco d'Itri
On Oct 22, Matthew Garrett [EMAIL PROTECTED] wrote:

 Canonical work because they consist of a small set of people that work
 together and who don't let egos get in the way. They work because they
 have a strong leader who provides firm direction. They work because they
 don't have the flaws Debian has - lack of communication, excessive
 self-importance and no strong feeling of what the fuck we're actually
 supposed to be doing. I don't see your solution or your method solving
 any of these issues. Building consensus helps with all of them. Consider
 investing your efforts in that, rather than refusing to discuss your
 opinions.
Amen. Of course, Canonical also works well because the Ubuntu developers
are *paid* to work on it, so I suppose they tend to deal quickly even
with those boring tasks which most of us procrastinate on for long
periods.

-- 
ciao, |
Marco | [8705 coaGypez1rpsI]




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Jérôme Marant
Steve Langasek [EMAIL PROTECTED] writes:

 I forgot to add "in Debian". No need to be harsh.

 I'm not sure why you think it's harsh of me to refute a bald,
 unsubstantiated assertion about what someone else believes -- which is what
 your comment is, with or without the "in Debian".  If Colin (who is in a
 better position to judge this than I am) believes that the architecture
 count in Ubuntu did not contribute significantly to the speed of their
 release cycle, then he's clearly making a case that there's merely
 *correlation* between the architecture count and the time to release, not
 *causality*.

Colin mentioned "architectures supported by Ubuntu".

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Mark Brown
On Sat, Oct 23, 2004 at 11:54:05AM +0200, Jérôme Marant wrote:
 Manoj Srivastava [EMAIL PROTECTED] writes:

  Could you elaborate on that? Why is it so painful?

 Probably because you need to maintain packages for both unstable and
 testing at the same time.

This is exactly what happened in the past when we forked off the frozen
release: you wound up maintaining both the frozen and unstable versions
of packages (unlike today, it was possible to upload to both
simultaneously if there was as yet no reason to fork).

-- 
You grabbed my hand and we fell into it, like a daydream - or a fever.




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Jérôme Marant
Mark Brown [EMAIL PROTECTED] writes:

 On Sat, Oct 23, 2004 at 11:54:05AM +0200, Jérôme Marant wrote:
 Manoj Srivastava [EMAIL PROTECTED] writes:

 Could you elaborate on that? Why is it so painful?

 Probably because you need to maintain packages for both unstable and
 testing at the same time.

 This is exactly what happened in the past when we forked off the frozen
 release: you wound up maintaining both the frozen and unstable versions
 of packages (unlike today it was possible to upload to both
 simultaneously if there was as yet no reason to fork).  

Yes, and everyone agrees it was far from ideal.

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Mark Brown
On Sat, Oct 23, 2004 at 08:56:45AM +0200, Jérôme Marant wrote:
 Frank Küster [EMAIL PROTECTED] writes:

  Oh, it would be easy for me to break the tetex-packages (and cause lots
  of FTBFS bugs) just by applying all the great ideas about improved
  packaging that I have in mind. No upstream version needed for that.

 Come on, this is ridiculous. Of course, you can always cheat if you
 want to. If we can't expect developers to be responsible people
 at all, then we can shut the Debian project down.

The trouble is that much the same thing can be said about new upstream
releases.

-- 
You grabbed my hand and we fell into it, like a daydream - or a fever.




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Francesco Paolo Lovergine
On Sat, Oct 23, 2004 at 11:54:05AM +0200, Jérôme Marant wrote:
 Manoj Srivastava [EMAIL PROTECTED] writes:
 
  Okay, that's what t-p-u is roughly for, but the fact is that it's
  quite painful.
 
  Could you elaborate on that? Why is it so painful?
 
 Probably because you need to maintain packages for both unstable and
 testing at the same time.

Uh? We have pbuilder and sbuild for that. What's so painful?

-- 
Francesco P. Lovergine




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Manoj Srivastava
On Sat, 23 Oct 2004 11:54:05 +0200, Jérôme Marant [EMAIL PROTECTED] said: 

 Manoj Srivastava [EMAIL PROTECTED] writes:
 Okay, that's what t-p-u is roughly for, but the fact is that it's
 quite painful.
 
 Could you elaborate on that? Why is it so painful?

 Probably because you need to maintain packages for both unstable and
 testing at the same time.

That is a simple branching issue in the version control
 system, no?
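
To make the branching concrete: the usual convention is that a t-p-u
upload carries a version sorting between what is in testing and the next
unstable upload (e.g. 1.2-3sarge1 between 1.2-3 and 1.2-4). A minimal
Python sketch of such a comparison follows; it is a crude, hypothetical
stand-in for dpkg's real algorithm (no epochs, no '~' handling):

import re

def deb_cmp(a, b):
    # Compare alternating digit and non-digit runs, dpkg-style
    # (simplified): numeric runs compare as integers, the rest as text.
    pa, pb = re.findall(r"\d+|\D+", a), re.findall(r"\d+|\D+", b)
    for x, y in zip(pa, pb):
        if x == y:
            continue
        if x.isdigit() and y.isdigit():
            return -1 if int(x) < int(y) else 1
        return -1 if x < y else 1
    return (len(pa) > len(pb)) - (len(pa) < len(pb))

# The t-p-u branch sorts between testing's version and unstable's:
assert deb_cmp("1.2-3", "1.2-3sarge1") < 0
assert deb_cmp("1.2-3sarge1", "1.2-4") < 0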

manoj
-- 
Whenever one finds oneself inclined to bitterness, it is a sign of
emotional failure.  -- Bertrand Russell
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Eduard Bloch
#include hallo.h
* Manoj Srivastava [Sat, Oct 23 2004, 12:27:03AM]:

  it.  This is how we fix problems in Debian: hide them, then propose
  General Resolutions.
 
  And your point is..?
 
   That a GR on technical issues is moronic?

Who declares them as technical issues?

  different ways to solve it. And before you think about writing
  another message, think about the reason for having the
  debian-private ML.
 
   It certainly is not to have moronic conversations like
  this. We should certainly not be hiding stupidity in Debian ranks.

Hahaha. It is pretty easy to say it is a technical issue, because then
you can always say "it is up to the maintainer|manager to deal with it,
shut up". That is what you prefer to do, IIRC.

Unfortunately, it is not that easy. Some decisions have to do with
technical issues, but they are based on subjective judgements.
Social problems make people pissed, and if the responsible people fail to
communicate, there is not much room for attribution.
And pissed developers act irrationally; irrationality ends up in the
insanity which we have seen on d-d in the last months.

In this respect, I think that Testing was a bad solution. A
pseudo-solution for mixed social/technical problems that were declared
technical problems, and the solution became a disaster.

Well, maybe it is just me. I am no exceptional case WRT the behaviour
analysis above.

Regards,
Eduard.
-- 
Human forgetfulness is something different from the tendency of some
politicians to be unable to remember.
-- Marcel Mart




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Matthias Urlichs
Hi, Manoj Srivastava wrote:

 Secondly, buildd's do
  not work with experimental.

That can be fixed quite easily. In fact, my own (personal) buildds do it.

-- 
Matthias Urlichs   |   {M:U} IT Design @ m-u-it.de   |  [EMAIL PROTECTED]




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Matthias Urlichs
Hi, Eduard Bloch wrote:

 In this respect, I think that Testing was a bad solution. A
 pseudo-solution for mixed social/technical problems that were declared
 technical problems, and the solution became a disaster.

Actually, I disagree. The social problem of "people don't like it when we
freeze Unstable" was solved quite well by the technical solution "we don't
need to freeze Unstable any more".

-- 
Matthias Urlichs   |   {M:U} IT Design @ m-u-it.de   |  [EMAIL PROTECTED]




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Andreas Barth
* Matthias Urlichs ([EMAIL PROTECTED]) [041023 23:00]:
 Hi, Manoj Srivastava wrote:

  Secondly, buildd's do
   not work with experimental.
 
 That can be fixed quite easily. In fact, my own (personal) buildds do it.

Actually, I'm also building experimental packages, for mips, hppa, sparc
and alpha.


Cheers,
Andi
-- 
   http://home.arcor.de/andreas-barth/
   PGP 1024/89FB5CE5  DC F1 85 6D A6 45 9C 0F  3B BE F1 D0 C5 D1 D9 0C




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Jérôme Marant
Francesco Paolo Lovergine [EMAIL PROTECTED] writes:

 On Sat, Oct 23, 2004 at 11:54:05AM +0200, Jérôme Marant wrote:
 Manoj Srivastava [EMAIL PROTECTED] writes:
 
  Okay, that's what t-p-u is roughly for, but the fact is that it's
  quite painful.
 
 Could you elaborate on that? Why is it so painful?
 
 Probably because you need to maintain packages for both unstable and
 testing at the same time.

 Uh? We have pbuilder and sbuild for that. What's so painful?

Testing the package. Running the distribution for real.

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Matthew Palmer
On Sat, Oct 23, 2004 at 02:40:24PM -0500, Manoj Srivastava wrote:
On Sat, 23 Oct 2004 11:54:05 +0200, Jérôme Marant [EMAIL PROTECTED] said: 
  Manoj Srivastava [EMAIL PROTECTED] writes:
  Okay, that's what t-p-u is roughly for, but the fact is that it's
  quite painful.
  
  Could you elaborate on that? Why is it so painful?
 
  Probably because you need to maintain packages for both unstable and
  testing at the same time.
 
   That is a simple branching issue in the version control
  system, no?

A huge rush of air fills the list as hundreds of developers fill their
lungs to collectively say "I don't use version control"...

- Matt




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Mike Hommey
On Sat, Oct 23, 2004 at 12:14:45PM -0500, Manoj Srivastava wrote:
 On Sat, 23 Oct 2004 14:23:48 +0900, Mike Hommey [EMAIL PROTECTED] said: 
 
  And why not, instead of freezing unstable, make it build against
  testing, when we try to freeze testing?
 
   Libraries. If you build against a library version that is no
  longer in unstable, then you may have issues in testing when a new
  library tries to migrate into testing -- because nowhere would there be
  packages built against the new library version.

I don't see the point. If you build against what is in testing, there's
no issue when migrating to testing.
One particular issue would be when libraries change ABI, and new
packages would need to be built against them, but still, at that
particular time, the purpose being mainly to freeze testing, these 
ABI changes should be candidates for experimental.
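
A toy model of the ABI problem both sides are pointing at, with entirely
invented package names: a library with a new SONAME is only worth
migrating together with packages actually built against it, so if every
recent upload had been built in a testing chroot, nothing would reference
the new ABI:

testing_libs  = {"libfoo1"}   # testing still ships the old SONAME
unstable_libs = {"libfoo2"}   # unstable ships the new one

# What recent uploads were linked against (built in a testing chroot):
built_against = {"app 2.4": "libfoo1", "tool 0.9": "libfoo1"}

def new_abi_has_users(new_lib):
    # The new library can usefully migrate only if some package uses it.
    return any(dep == new_lib for dep in built_against.values())

print(new_abi_has_users("libfoo2"))  # False: nothing built against it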

   Not to mention that unstable would become unviable as a
  distribution -- the run time libs may not be the ones that are needed
  by the packages in unstable.

At that particular time, isn't frozen-testing the one that is supposed
to be a distribution?

  Okay, that's what t-p-u is roughly for, but the fact is that it's
  quite painful.
 
   Could you elaborate on that? Why is it so painful?

On top of the problems mentioned in the other replies, there is the fact
that autobuilders have to be set up for t-p-u... can you remind me how long
sarge has been planned for freeze? And for how long autobuilders have been
required for alpha and mips for t-p-u?

Mike




Re: Ubuntu discussion at planet.debian.org

2004-10-23 Thread Henrique de Moraes Holschuh
On Sun, 24 Oct 2004, Matthew Palmer wrote:
 On Sat, Oct 23, 2004 at 02:40:24PM -0500, Manoj Srivastava wrote:
  On Sat, 23 Oct 2004 11:54:05 +0200, Jérôme Marant [EMAIL PROTECTED] said: 
   Manoj Srivastava [EMAIL PROTECTED] writes:
   Okay, that's what t-p-u is roughly for, but the fact is that it's
   quite painful.
   
   Could you elaborate on that? Why is it so painful?
  
   Probably because you need to maintain packages for both unstable and
   testing at the same time.
  
  That is a simple branching issue in the version control
   system, no?
 
 A huge rush of air fills the list as hundreds of developers fill their
 lungs to collectively say "I don't use version control"...

Is there really a developer out there that doesn't do even the most
rudimentary VC by keeping copies of all the source packages he has
uploaded/worked on?

-- 
  One disk to rule them all, One disk to find them. One disk to bring
  them all and in the darkness grind them. In the Land of Redmond
  where the shadows lie. -- The Silicon Valley Tarot
  Henrique Holschuh




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Martin Schulze
Jérôme Marant wrote:
 It's too bad that interesting discussions take place in blogs rather
 than in Debian mailing lists, especially for those who don't blog
 but would like to participate.

Logbooks are suited for a lot, but not for discussions.  They're more
suited for experiences, statements and the like.

I'm thankful you're taking the discussion to this list, where probably
more people will be able to participate as well.

 Scott James Remnant said something interesting about Ubuntu release
 management: Ubuntu people run the distribution that gets released,
 and the distribution is frozen until it's ready.

 
 Debian developers, on the contrary, run unstable and rarely run
 testing, which means that they don't really know about the shape
 of what they release.

Since testing is what unstable was (and many packages are the same in
both sid and sarge), this is often not the case when you look at
individual packages or groups of packages.

However, it is true that the developers often run the bleeding edge
suite since that's the development target most of the time.

 The Testing distribution helped a lot in release management,
 especially for synchronizing architectures.

Despite some problems that weren't dealt with (missing dependencies,
missing/unfulfilled source dependencies), testing worked pretty well.

 Some improvements have already been proposed by Eduard Bloch and
 Adrian Bunk: freezing unstable while keeping testing.

It may pose a problem that development in unstable usually continues
while testing is frozen and only important bugs should be fixed.

However, if unstable were frozen at the same time, would
development stop?  Probably not.  I'm pretty sure that several people
would start separate repositories and the like to make more recent
versions of the software they maintain available.

We must not forget the focus on fixing the frozen distribution and
making it ready, though.

 Freezing unstable forces people to work on fixing bugs, and the
 quicker the bugs are fixed, the quicker the distribution is
 released and the quicker Debian people can start working on
 the next release.

Freezing unstable forces people not to do development in unstable.
It won't force people to fix bugs and the like.  Closing a motorway
won't stop people from driving (too) fast; it would just stop people from
using the motorway for driving (too) fast.

Regards,

Joey

-- 
Reading is a lost art nowadays.  -- Michael Weber

Please always Cc to me when replying to me on the lists.




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Eduard Bloch
#include hallo.h
* Jérôme Marant [Fri, Oct 22 2004, 10:20:51AM]:

 Some improvements have already been proposed by Eduard Bloch and
 Adrian Bunk: freezing unstable while keeping testing.

Jerome, please, you could have asked me. I am preparing an internal GR
draft for exactly this issue, but it is to be made public on the day of the
release, and better not before. We should concentrate on making the
Sarge release ready, NOW. Do not start another flamewar.

Regards,
Eduard.
-- 
yath bla. then I'll just do it as root.
erich yath's machine Oh yes, do it to me as root!




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Jérôme Marant
Selon Eduard Bloch [EMAIL PROTECTED]:

 #include hallo.h
 * Jérôme Marant [Fri, Oct 22 2004, 10:20:51AM]:

  Some improvements have already been proposed by Eduard Bloch and
  Adrian Bunk: freezing unstable while keeping testing.

 Jerome, please, you could have asked me. I am preparing an internal GR draft

I mentioned your name because the idea comes from you.

 for exactly this issue, but it is to be made public on the day of the
 release, and better not before. We should concentrate on making the
 Sarge release ready, NOW. Do not start another flamewar.

I do not intend to start a new flamewar. The discussion is happening
somewhere else anyway, and I think the subject deserves a wider
audience.

Cheers,

--
Jérôme Marant




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Matthew Garrett
Eduard Bloch [EMAIL PROTECTED] wrote:

 Jerome, please, you could have asked me. I am preparing an internal GR draft
 for exactly this issue, but it is to be made public on the day of the
 release, and better not before. We should concentrate on making the
 Sarge release ready, NOW. Do not start another flamewar.

Is the entire world on crack and I just failed to notice until now?

-- 
Matthew Garrett | [EMAIL PROTECTED]




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Jérôme Marant
Selon Martin Schulze [EMAIL PROTECTED]:

 I'm thankful you're taking the discussion to this list, where probably
 more people will be able to participate as well.

I hope so.


[...]

  Some improvements have already been proposed by Eduard Bloch and
  Adrian Bunk: freezing unstable while keeping testing.

 It may pose a problem that development in unstable usually continues
 while testing is frozen and only important bugs should be fixed.

 However, if unstable were frozen at the same time, would
 development stop?  Probably not.  I'm pretty sure that several people
 would start separate repositories and the like to make more recent
 versions of the software they maintain available.

I think it would be marginal. After all, the experimental distribution
does exist for this purpose, and nonetheless people do not neglect
unstable.

 We must not forget the focus on fixing the frozen distribution and
 making it ready, though.

  Freezing unstable forces people to work on fixing bugs, and the
  quicker the bugs are fixed, the quicker the distribution is
  released and the quicker Debian people can start working on
  the next release.

 Freezing unstable forces people not to do development in unstable.
 It won't force people to fix bugs and the like.  Closing a motorway
 won't stop people from driving (too) fast; it would just stop people
 from using the motorway for driving (too) fast.

Before testing, the RM used to freeze unstable and people were
working on fixing bugs. There were pretest cycles with bug horizons,
and freezes were shorter.
Of course, without testing, synchronizing arches was a pain,
that's why I'd say let's combine both.

Instead of always saying that a given idea won't work, let's
try it and conclude afterwards.

Cheers,

--
Jérôme Marant




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Romain Francoise
Matthew Garrett [EMAIL PROTECTED] writes:

 Jerome, please, you could have asked me. I am preparing an internal GR draft
 for exactly this issue, but it is to be made public on the day of the
 release, and better not before. We should concentrate on making the
 Sarge release ready, NOW. Do not start another flamewar.

 Is the entire world on crack and I just failed to notice until now?

Don't worry, we're preparing an internal General Resolution to address
this crack problem, but you're not supposed to know about it.  This is
how we fix problems in Debian: hide them, then propose General
Resolutions.

-- 
  ,''`.
 : :' :Romain Francoise [EMAIL PROTECTED]
 `. `' http://people.debian.org/~rfrancoise/
   `-




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Antti-Juhani Kaijanaho
On 20041022T134825+0200, Jérôme Marant wrote:
 Before testing, the RM used to freeze unstable and people were
 working on fixing bugs. There were pretest cycles with bug horizons,
 and freezes were shorter.

That's not true (unless you are talking about something that ceased
several years before testing became live, certainly before I started
following Debian development in 1998).  Before testing the RM used to
fork unstable into a frozen distribution.  Unstable was still open for
development, and heated arguments developed on this very list asking
that the process be changed so that unstable would be frozen; this was
never done.

I don't know what you mean by "pretest cycles with bug horizons".

The current freeze has been quite short - if one ignores the current
delay caused by the missing testing security support - and pre-testing
freezes were not that much shorter (unless, again, one looks at ancient
history, when Debian was a lot smaller).

 Instead of always saying that a given idea won't work, let's
 try it and conclude afterwards.

The problem is that on this scale trying such things out is costly and
time-consuming.  Arguably we are still in the process of trying
testing.

-- 
Antti-Juhani Kaijanaho, Debian developer 

http://kaijanaho.info/antti-juhani/blog/en/debian




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Eduard Bloch
#include hallo.h
* Romain Francoise [Fri, Oct 22 2004, 06:04:12PM]:

  Is the entire world on crack and I just failed to notice until now?
 
 Don't worry, we're preparing an internal General Resolution to address
 this crack problem, but you're not supposed to know about it.  This is
 how we fix problems in Debian: hide them, then propose General
 Resolutions.

And your point is..? 

It is our right to hide things. We do not hide problems, we hide
possible solutions. The problem is well known, but there are different
ways to solve it. And before you think about writing another message,
think about the reason for having the debian-private ML.

Regards,
Eduard.
-- 
jjFux If the (damn good) feeling with galon lasts, I'll
send opera into retirement tomorrow
LordYago jjFux: Does Opera fall under the generational contract? ;-)




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Jérôme Marant
Antti-Juhani Kaijanaho [EMAIL PROTECTED] writes:

 On 20041022T134825+0200, Jérôme Marant wrote:
 Before testing, the RM used to freeze unstable and people were
 working on fixing bugs. There were pretest cycles with bug horizons,
 and freezes were shorter.

 That's not true (unless you are talking about something that ceased
 several years before testing became live, certainly before I started
 following Debian development in 1998).  Before testing the RM used to
 fork unstable into a frozen distribution.  Unstable was still open for
 development, and heated arguments developed on this very list asking
 that the process be changed so that unstable would be frozen; this was
 never done.

 I don't know what you mean by pretest cycles with bug horizons.


You are correct. It seems so old to me that I didn't even recall
it was a fork. This indeed explains why that process had to
be improved. It also explains why the current process needs to
be improved as well.

Thanks to Ubuntu, we now have a good example of what's proven
to work.

 The current freeze has been quite short - if one ignores the current
 delay caused by the missing testing security support - and pre-testing
 freezes were not that much shorter (unless, again, one looks at ancient
 history, when Debian was a lot smaller).

I was referring to the woody freeze.

 Instead of always saying that a given idea won't work, let's
 try it and conclude afterwards.

 The problem is that on this scale trying such things out is costly and
 time-consuming.  Arguably we are still in the process of trying
 testing.

I didn't say let's try it right now, and certainly not while trying
to release Sarge.

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Joey Hess
Martin Schulze wrote:
 Logbooks are suited for a lot, but not for discussions.  They're more
 suited for experiences, statements and the like.
 
 I'm thankful you're taking the discussion to this list, where probably
 more people will be able to participate as well.

Indeed..

 However, if unstable were frozen at the same time, would
 development stop?  Probably not.  I'm pretty sure that several people
 would start separate repositories and the like to make more recent
 versions of the software they maintain available.

When we used to freeze unstable before a release, one of the problems
was that many updates were blocked by that, and once the freeze was
over, unstable tended to become _very_ unstable, and took months to get
back into shape.

-- 
see shy jo




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread D. Starner
 And before you think about writing another message,
 think about the reason for having the debian-private ML.

The reason why debian-private exists is so people can
talk about sensitive issues without posting them on
the web, especially things involving personal or private
things between people. It's not so we can hide technical
discussions about non-security issues away from everyone.

  This is
  how we fix problems in Debian: hide them, then propose General
  Resolutions.

 And your point is..? 

I agree with him; not speaking for him, but...

That that's wrong. That GRs have been proposed way too often recently.
That we should discuss things long before we propose a GR, so that
even if it's formally necessary to have a GR, it's largely a moot
issue. That GRs are a last step, not a first one.

 It is our right to hide things.

Just because it's your right to hide things, doesn't mean that
you must or should.

 We do not hide problems, we hide
 possible solutions.

And that's _so_ much better.

David Starner -- [EMAIL PROTECTED]




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Otavio Salvador
|| On Fri, 22 Oct 2004 14:52:05 -0400
|| Joey Hess [EMAIL PROTECTED] wrote: 

jh Martin Schulze wrote:
 Logbooks are suited for a lot, but not for discussions.  They're more
 suited for experiences, statements and the like.
 
 I'm thankful you're taking the discussion to this list, where probably
 more people will be able to participate as well.

jh Indeed..

 However, if unstable were frozen at the same time, would
 development stop?  Probably not.  I'm pretty sure that several people
 would start separate repositories and the like to make more recent
 versions of the software they maintain available.

jh When we used to freeze unstable before a release, one of the problems
jh was that many updates were blocked by that, and once the freeze was
jh over, unstable tended to become _very_ unstable, and took months to get
jh back into shape.

Sure, but now we have the experimental distribution to deal with it
while we are stabilizing the unstable and testing distributions. The
current problem is that experimental is not a full distribution and doesn't
have buildd systems.

-- 
O T A V I OS A L V A D O R
-
 E-mail: [EMAIL PROTECTED]  UIN: 5906116
 GNU/Linux User: 239058 GPG ID: 49A5F855
 Home Page: http://www.freedom.ind.br/otavio
-
Microsoft gives you Windows ... Linux gives
 you the whole house.




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Andreas Barth
* Otavio Salvador ([EMAIL PROTECTED]) [041022 22:15]:
 Sure, but now we have the experimental distribution to deal with it
 while we are stabilizing the unstable and testing distributions. The
 current problem is that experimental is not a full distribution and doesn't
 have buildd systems.

Actually, a lot of packages in experimental are autobuilt now (as long
as they are buildable in unstable, and only on some archs).


Cheers,
Andi
-- 
   http://home.arcor.de/andreas-barth/
   PGP 1024/89FB5CE5  DC F1 85 6D A6 45 9C 0F  3B BE F1 D0 C5 D1 D9 0C




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Eduard Bloch
#include hallo.h
* D. Starner [Fri, Oct 22 2004, 11:31:10AM]:
  And before you think about writing another message,
  think about the reason for having the debian-private ML.

And why do you move parts of my message around?! To place your part of
the answer in the beginning, to look more important?

But, hey, why t.f. do you not just go and fix some bugs instead of
writing another useless message? Maybe beginning with your own packages,
or looking at some RC bugs?

 The reason why debian-private exists is so people can
 talk about sensitive issues without posting them on
 the web, especially things involving personal or private
 things between people. It's not so we can hide technical
 discussions about non-security issues away from everyone.

Starting another mega-thread on a controversial issue causes ill-feeling,
wastage of spare time and loss of productivity. And there are _always_
flamewars when someone says "GR draft" or even posts a controversial
paper looking like a good GR candidate.
Or do you really believe that mega-threads help much? Do you really
think that Canonical/Ubuntu is more successful because they discuss
more and let everyone publish their $0.02 that everybody needs to read? Do
you really think that the explosion of redundant messages in mega-threads
is productive?

 That that's wrong. That GRs have been proposed way too often recently.

Exactly. That is why I am not going to release a half-done paper. It is
better discussed in a small circle. The GR drafts posted in the
last months caused something I wish to avoid - fscking huge flamewars.

  We do not hide problems, we hide
  possible solutions.
 
 And that's _so_ much better.

If we get more important things done first - yes.

And from now on, I will refuse to answer anything posted to this
subthread.

Regards,
Eduard.
-- 
stockholm Overfiend: why dont you flame him? you are good at that.
Overfiend I have too much else to do.




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Jérôme Marant
Joey Hess [EMAIL PROTECTED] writes:

 However, if unstable were frozen at the same time, would
 development stop?  Probably not.  I'm pretty sure that several people
 would start separate repositories and the like to make more recent
 versions of the software they maintain available.

 When we used to freeze unstable before a release, one of the problems
 was that many updates were blocked by that, and once the freeze was
 over, unstable tended to become _very_ unstable, and took months to get
 back into shape.

What do you think we'd get by combining both (testing + unstable freeze)?

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Romain Francoise
Eduard Bloch [EMAIL PROTECTED] writes:

 And your point is..? 

..lost on you, obviously.

 It is our right to hide things. We do not hide problems, we hide
 possible solutions.

This is ludicrous.

 And before you think about writing another message, think about the
 reason for having the debian-private ML.

I am well aware of the reason why we have this list and it is entirely
irrelevant to this discussion.

Let's end this farce: I will wait for your secret GR to be proposed,
then we can have a more productive discussion.  In the meantime, if the
burden of keeping this miracle solution to yourself gets too heavy, feel
free to share it with us mere mortals on a Debian list of your choice.

Cheers,

-- 
  ,''`.
 : :' :Romain Francoise [EMAIL PROTECTED]
 `. `' http://people.debian.org/~rfrancoise/
   `-




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Matthew Garrett
Eduard Bloch [EMAIL PROTECTED] wrote:

 Or do you really believe that mega-threads help much? Do you really
 think that Canonical/Ubuntu is more successful because they discuss
 more and let everyone publish their $0.02 that everybody needs to read? Do
 you really think that the explosion of redundant messages in mega-threads
 is productive?

Canonical work because they consist of a small set of people that work
together and who don't let egos get in the way. They work because they
have a strong leader who provides firm direction. They work because they
don't have the flaws Debian has - lack of communication, excessive
self-importance and no strong feeling of what the fuck we're actually
supposed to be doing. I don't see your solution or your method solving
any of these issues. Building consensus helps with all of them. Consider
investing your efforts in that, rather than refusing to discuss your
opinions.

-- 
Matthew Garrett | [EMAIL PROTECTED]




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Jérôme Marant
Matthew Garrett [EMAIL PROTECTED] writes:

 Canonical work because they consist of a small set of people that work
 together and who don't let egos get in the way. They work because they
 have a strong leader who provides firm direction. They work because they
 don't have the flaws Debian has - lack of communication, excessive
 self-importance and no strong feeling of what the fuck we're actually
 supposed to be doing. I don't see your solution or your method solving
 any of these issues. Building consensus helps with all of them. Consider
 investing your efforts in that, rather than refusing to discuss your
 opinions.

Are you saying that technical choices do not contribute to the success
of Canonical? For instance, deciding to target the distribution at
most popular architectures only?
I'd be interested in hearing your point of view on the technical
flaws as well.

Thanks.

-- 
Jérôme Marant

http://marant.org




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Matthew Garrett
Jérôme Marant [EMAIL PROTECTED] wrote:

 Are you saying that technical choices do not contribute to the success
 of Canonical? For instance, deciding to target the distribution at
 most popular architectures only?

Supporting a reduced range of both targets and software makes life
slightly easier, yes. But I've no especially good reason to believe that
they'd be less successful if they had a slightly larger staff and
supported all our architectures.

It's not the technical issues with supporting multiple architectures
that give us problems - it's the social issues surrounding access to
buildds, incorporation into architectures, people failing to fix
architecture specific bugs, people demanding that people fix
architecture specific bugs, that sort of thing. It's undoubtedly true
that we could release slightly faster with fewer architectures, but it's
also true that we'd find something else to argue about in order to
remove any advantage. 

 I'd be interested in hearing your point of view on the technical
 flaws as well.

In Debian? I think what technical flaws there are are masked by other
problems. We're actually spectacularly good at dealing with technical
issues in comparison to most distributions.

-- 
Matthew Garrett | [EMAIL PROTECTED]




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Colin Watson
On Fri, Oct 22, 2004 at 02:48:01PM +0200, Jérôme Marant wrote:
 Joey Hess [EMAIL PROTECTED] writes:
  When we used to freeze unstable before a release, one of the problems
  was that many updates were blocked by that, and once the freeze was
  over, unstable tended to become _very_ unstable, and took months to get
  back into shape.
 
 What do you think we'd get by combining both (testing + unstable freeze)?

My guess is that the release team would go insane having to approve
every upload to unstable.

Before you say it, it's much easier to do this sort of thing in Ubuntu
because we have a small enough team that we don't have to lock down the
archive during freezes, but instead just say "don't upload without
approval". In Debian, we've seen many times (e.g. when trying to get
large groups of interdependent packages into testing) that not all
developers can be assumed to have read announcements or will agree with
the procedure, and I think we could expect many unapproved uploads if we
tried such an open procedure; so we'd have to lock down the archive
using technical measures.
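
For concreteness, one way such a technical lock-down could look, sketched
in Python; the whitelist contents are invented, and this is not how the
real archive software implements freezes:

# Hypothetical freeze gate: only uploads pre-approved by the release
# team are accepted; everything else is rejected during the freeze.
approved = {("openssh", "1:3.8.1p1-8"), ("glibc", "2.3.2.ds1-17")}

def accept_upload(source, version, frozen=True):
    if not frozen:
        return True                       # normal operation: accept all
    return (source, version) in approved  # freeze: need prior approval

print(accept_upload("openssh", "1:3.8.1p1-8"))  # True: on the approved list
print(accept_upload("foo", "1.0-1"))            # False: rejected in freeze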

The result of this is that the load on the Debian release team if we
tried this would be significantly higher than the load on their Ubuntu
counterparts, not even counting the order of magnitude increase in the
number of packages involved. I doubt we'd be able to get much else done
at all without increasing the size of the team to the point where
effective coordination became impossible.

Cheers,

-- 
Colin Watson   [EMAIL PROTECTED]




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Colin Watson
On Fri, Oct 22, 2004 at 03:53:28PM +0200, Jérôme Marant wrote:
 Matthew Garrett [EMAIL PROTECTED] writes:
  Canonical work because they consist of a small set of people that work
  together and who don't let egos get in the way. They work because they
  have a strong leader who provides firm direction. They work because they
  don't have the flaws Debian has - lack of communication, excessive
  self-importance and no strong feeling of what the fuck we're actually
  supposed to be doing. I don't see your solution or your method solving
  any of these issues. Building consensus helps with all of them. Consider
  investing your efforts in that, rather than refusing to discuss your
  opinions.
 
 Are you saying that technical choices do not contribute to the success
 of Canonical? For instance, deciding to target the distribution at
 most popular architectures only?

In my experience as both a Canonical employee and a Debian developer,
the number of architectures supported by Ubuntu makes a negligible
difference to Ubuntu's ability to release.

Cheers,

-- 
Colin Watson   [EMAIL PROTECTED]




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Steve Greenland
On 22-Oct-04, 05:25 (CDT), Jérôme Marant [EMAIL PROTECTED] wrote: 
 Thanks to Ubuntu, we now have a good example of what's proven
 to work.

Yes, pay 30 (40?) developers to work full-time on stabilizing a subset
of Debian. Somehow I don't think that's going to work for the Debian
Project.

Steve

-- 
Steve Greenland
The irony is that Bill Gates claims to be making a stable operating
system and Linus Torvalds claims to be trying to take over the
world.   -- seen on the net




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread D. Starner
 But, hey, why t.f. do you not just go and fix some bugs instead of
 writing another useless message? Maybe beginning with your own packages,
 or looking at some RC bugs?

To avoid a flame war, you curse at me, flame me, tell me what to do, and
to boot are hypocritical in the last part (as you too are writing a
useless message). Perhaps you should try politeness to avoid a flame
war.

David Starner -- [EMAIL PROTECTED]




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Manoj Srivastava
On Fri, 22 Oct 2004 16:56:31 -0300, Otavio Salvador [EMAIL PROTECTED] said: 

jh When we used to freeze unstable before a release, one of the
jh problems was that many updates were blocked by that, and once the
jh freeze was over, unstable tended to become _very_ unstable, and
jh took months to get back into shape.

 Sure, but now we have the experimental distribution to deal with it

We've always had experimental. But consider this: experimental
 contains packages _known_ to be volatile, and nobody sane has
 experimental turned on for their boxes (most people cherry pick a
 package or two that they are interested in).  Secondly, buildd's do
 not work with experimental.

 while we are stabilizing the unstable and testing distributions. The
 current problem is that experimental is not a full distribution and
 doesn't have buildd systems.

That too. If packages don't get tested, you have indeed
 arrested development.

manoj
-- 
Death before dishonor.  But neither before breakfast.
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Manoj Srivastava
On Fri, 22 Oct 2004 14:48:01 +0200, Jérôme Marant [EMAIL PROTECTED] said: 

 Joey Hess [EMAIL PROTECTED] writes:
 However, if unstable were frozen at the same time, would
 development stop?  Probably not.  I'm pretty sure that several
 people would start separate repositories and the like to make more
 recent versions of the software they maintain available.
 
 When we used to freeze unstable before a release, one of the
 problems was that many updates were blocked by that, and once the
 freeze was over, unstable tended to become _very_ unstable, and
 took months to get back into shape.

 What do you think we'd get by combining both (testing + unstable
 freeze)?

If you freeze unstable anyway, you are blocking the updates --
 and thus have all the problems of this style of interrupted
 development. If unstable is frozen, what is the point of Testing?

Am I missing something in your (somewhat nebulous) proposal?

manoj
-- 
The new Linux anthem will be "He's an idiot, but he's ok", as
performed by Monty Python.  You'd better start practicing. -- Linus
Torvalds, announcing another kernel patch
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Manoj Srivastava
On Fri, 22 Oct 2004 10:20:51 +0200, Jérôme Marant [EMAIL PROTECTED] said: 

 Debian developers, on the contrary, run unstable and rarely run
 testing, which means that they don't really know about the shape of
 what they release.

The reason I run unstable is because that is where I upload
to -- and that is where the shared libs are that my packages use, and
that is where I work out the bugs experienced. However, testing does
not seem to be too far off from unstable in the packages I use a
lot.

 The Testing distribution helped a lot in release
 management, especially for synchronizing architectures.  Some
 improvements have already been proposed by Eduard Bloch and Adrian
 Bunk: freezing unstable while keeping testing.  Freezing unstable
 forces people to work on fixing bugs, and the quicker the bugs are
 fixed, the quicker the distribution is released and the quicker

This is a fallacy.  In the past, when we did freeze unstable,
it never forced me to do anything but twiddle my thumbs for months
until things got moving again. Freezing unstable did not
make me fix any more bugs, since the bugs were not in packages I
was in any way an expert in.

Freezes just used to be a frustrating, prolonged period in
 which I did no Debian work at all, waiting for unstable to thaw back
 out.

manoj
-- 
The geeks shall inherit the earth. Karl Lehenbauer
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C




Re: Ubuntu discussion at planet.debian.org

2004-10-22 Thread Manoj Srivastava
On Fri, 22 Oct 2004 13:48:25 +0200, Jérôme Marant [EMAIL PROTECTED] said: 

 Selon Martin Schulze [EMAIL PROTECTED]:
 I'm thankful you're taking the discussion to this list, where
 probably more people will be able participate as well.

 I hope so.

 [...]

  Some improvements have already been proposed by Eduard Bloch and
  Adrian Bunk: freezing unstable while keeping testing.
 
 It may pose a problem that development in unstable usually
 continues while testing is frozen and only important bugs should be
 fixed.
 
  However, if unstable were frozen at the same time, would
  development stop?  Probably not.  I'm pretty sure that several
  people would start separate repositories and the like to make more
  recent versions of the software they maintain available.

 I think it would be marginal. After all, the experimental
 distribution does exist for this purpose, and nonetheless people do
 not neglect unstable.

I do not think you understand what the experimental
 distribution is, and how it is different from unstable, if you can
 say that. (not a full distribution, contains truly volatile packages,
 not supported by buildd's, for a start).

 We must not forget the focus on fixing the frozen distribution and
 making it ready, though.
 
  Freezing unstable forces people to work on fixing bugs, and the
  quicker the bugs are fixed, the quicker the distribution is
 released and the quicker Debian people can start working on
  the next release.
 
 Freezing unstable forces people not to do development in unstable.
 It won't force people to fix bugs and the like.  Closing a motorway
 won't stop people from driving (too) fast; it would just stop people
 from using the motorway for driving (too) fast.

 Before testing, the RM used to freeze unstable and people were
 working on fixing bugs. There were pretest cycles with bug horizons,

Not true. People were mostly twiddling their thumbs. Only a
 small subset of people can actually help in fixing RC bugs.

 and freezes were shorter.  Of course, without testing,
 synchronizing arches was a pain, that's why I'd say let's combine
 both.

 Instead of always saying that a given idea won't work, let's try it
 and conclude afterwards.

We have tried the whole freezing route. But feel free to try
 it out (like aj did Testing), and tell us how it would have worked.


manoj
-- 
Ha. I say let them try -- even vi+perl couldn't match the power of an
editor which is, after all, its own OS.  ;-) -- Johnie Ingram on
debian-devel
Manoj Srivastava   [EMAIL PROTECTED]  http://www.debian.org/%7Esrivasta/
1024D/BF24424C print 4966 F272 D093 B493 410B  924B 21BA DABB BF24 424C