Re: Temporal Release Strategy

2005-04-22 Thread Adrian Bunk
On Fri, Apr 22, 2005 at 12:21:49PM -0400, Patrick Ouellette wrote:
 On Wed, Apr 20, 2005 at 04:56:32AM +0200, Adrian Bunk wrote:
  The rules and goals of testing are clear.
  
  The more interesting points are the problems of testing that several 
  years of using it have shown.
  
   If package FOO has a RC bug, then everything that depends on FOO will be
   stuck AT WHATEVER POINT IT IS IN THE PROCESS until FOO is fixed.  If
   fixing FOO breaks BAR, then they all wait again until BAR is fixed.  Use
   of experimental to work through some of these issues would help.
   I'm not saying it won't take manual coordination to handle complex
   changes to the system.  I'm not saying it will make anyone's life
   easier.  What my proposal will do is provide the ability to decide when
   package $PACKAGE makes it into stable; we will call that an official
   release and give it a number.  Alternatively, you could declare a Debian
   release every $INTERVAL.  What is in stable should have been well
   tested, and supportable.  Stable no longer is a static concept, but a
   slowly evolving thing.  If you cannot wrap your mind around accepting
   a stable that evolves, we could snapshot stable at release date and make
   a separate archive (really a Packages.gz and related files as long as
   the version of the package in the release exists in the package pool).
  
  You completely miss my point:
  
  There are several transitions every month, and a big transition can 
  involve several hundred packages.
  
  Your proposal requires that _every single_ package that is part of a 
  transition has to be both ready and in testing for over 3 months before 
  it can enter your proposed candidate.
  
  If _one_ of the packages that is part of a transition is updated in 
  testing during this time, the 3 months start again. For bigger 
  transitions, it's therefore practically impossible that they will be 
  able to enter your candidate.
 
 I don't believe I missed your point; you just don't seem to be able to
 grasp the fact that I intend candidate to change slowly. 
 
 Yes, I am proposing that every package involved in a transition be of
 adequate quality to be promoted to candidate.  The purpose of the entire
 release system is to ensure the quality of the Debian distribution.
 Debian releases when it's ready because Debian demands a certain
 minimum level of quality (currently defined as an arbitrary number of RC
 bugs in packages of variable importance in the distribution as seen by
 the release manager).  I'm proposing a system that allows "when it's
 ready" to be defined and automated.  Our current release system places
 an enormous burden on the release manager.
 
  
  Please try to understand the limitations of testing before proposing 
  something even stricter.
  
 
 I understand the limitations of testing.  In fact, I am depending on the
 limitations of the testing rules to ensure that candidate is of adequate
 quality and changes slowly enough to be used on desktop workstations and
 that stable is adequate for servers.


The problem is that for many transitions, "slowly" means "never", since 
the criteria you set are unlikely to be fulfilled for all parts of such 
a transition at any time in the future.

And the more time passes, the more complicated it becomes, since 
additional transitions might become interdependent with this transition, 
making the problem even harder to untangle.


 I am proposing a system that removes some of the arbitrary nature of
 what we call a stable package.  I'm proposing that we define QUALITY
 CONTROL standards that ALL packages adhere to so that when someone says
 they recommend Debian's testing/candidate/stable release, they can point to a
 testing system that allows the person to select which branch they use
 based upon well-known, published criteria for the stability of that
 particular branch.  The user controls the amount of risk they are
 willing to have in their system.


That's already true today.

People who like the latest software can choose between unstable and 
testing, with testing usually having somewhat fewer known bugs.

People who want stability use stable.


 Testing, candidate and stable should change progressively slower.  That
 is the entire point.


As I am trying to explain, the speed of changes to stable will soon 
become zero.


If you believe your approach would work, please try the following:

Take stable from today, and make this your candidate.
Take testing from today.

Create a complete list of all packages that have to go from testing into 
your candidate _at the same time_ for getting e.g. the tiff transition 
into your candidate.
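
A rough sketch of how such a list could be produced (this is not an
official tool - the Packages file location and the libtiff4 package name
are only examples, and build dependencies are ignored here):

    import re
    from collections import defaultdict

    def parse_packages(path):
        """Map each binary package to the set of packages it depends on."""
        deps = defaultdict(set)
        name = None
        for line in open(path):
            if line.startswith("Package:"):
                name = line.split(":", 1)[1].strip()
            elif line.startswith("Depends:") and name:
                for field in line.split(":", 1)[1].split(","):
                    dep = re.split(r"[\s(|]", field.strip())[0]
                    if dep:
                        deps[name].add(dep)
        return deps

    def transition_set(deps, root):
        """Closure of reverse dependencies of 'root' (the transition set)."""
        rdeps = defaultdict(set)
        for pkg, ds in deps.items():
            for d in ds:
                rdeps[d].add(pkg)
        todo, seen = [root], {root}
        while todo:
            for p in rdeps[todo.pop()]:
                if p not in seen:
                    seen.add(p)
                    todo.append(p)
        return seen

    deps = parse_packages("Packages")   # e.g. testing main binary-i386 index
    print(len(transition_set(deps, "libtiff4")), "packages in the transition")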

If after this you do still believe your approach would work, please send 
the complete list of packages you think would be involved in this one 
transition (to let me check whether you missed some - there are many more 
than a hundred), and explain at which point in the future you expect 
every single package in the list to fulfill your criteria at the same time.

Re: Temporal Release Strategy

2005-04-22 Thread Patrick Ouellette
On Mon, Apr 18, 2005 at 07:37:40PM -0500, Gunnar Wolf wrote:
 Patrick Ouellette wrote [Sat, Apr 16, 2005 at 01:04:59AM -0400]:
  (...)
  Another difference is that testing will get new versions of packages and
  those versions might (but should not) cause breakage.  Testing has had
  breakage issues in the past.  Ten days is not enough time to catch all
  the possible interactions (or even the majority of them).  I'm also not
  naive enough to think that my proposed candidate step will never cause
  breakage.  The purpose of the additional step is to have a place where
  things change slower than testing to catch more of the obscure bugs that
  only become apparent with more time.  By requiring there be 0 RC bugs to
   progress from testing to candidate and candidate to stable, we
   cause stable to change when the software really stabilizes, not at an
  arbitrary time selected by the release team. 
 
 Umh... And... Well, if a RC bug is found in candidate, will it take (a
 very minimum of) one month for the fix to get there? 

Yes, that is true.  It will take time for the fix to work through the
system, and there is also the possibility of finding additional RC bugs
in the candidate version that further delay the cycle.  That's how the
iterative develop-test-release cycle works.
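
Under the rules as proposed, the minimum timeline for one fix is easy to
work out (a sketch only; the 10-day figure is the usual testing
quarantine, the other two are the ages used in this proposal):

    # Earliest possible path of a fix through the proposed stages.
    DAYS_UNSTABLE_TO_TESTING = 10    # normal-urgency quarantine
    DAYS_TESTING_TO_CANDIDATE = 90   # proposed: 3 months in testing, 0 RC bugs
    DAYS_TESTING_TO_STABLE = 180     # proposed: 6 months in testing, 0 RC bugs

    print("fix reaches candidate after >= %d days"
          % (DAYS_UNSTABLE_TO_TESTING + DAYS_TESTING_TO_CANDIDATE))
    print("fix reaches stable after    >= %d days"
          % (DAYS_UNSTABLE_TO_TESTING + DAYS_TESTING_TO_STABLE))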

 
 Don't you think that, during the release cycle (and especially during
 its first phase after a release) we will always have one RC bug
 keeping a second one from getting fixed?


If that is indeed the case, no software would ever be released.

The trick is to make the number of known RC bugs zero at the time a
package moves from one stage to the next.  If a bug truly is release
critical, then that package should not be in the release while it
is known to contain that bug.

Pat

-- 

Patrick Ouellette
[EMAIL PROTECTED]
[EMAIL PROTECTED]
Amateur Radio: KB8PYM 





Re: Temporal Release Strategy

2005-04-22 Thread Eduard Bloch
Hi Patrick!
Patrick Ouellette schrieb am Freitag, den 22. April 2005:

  And the more time passes, it becomes more and more complicated since 
  additional transitions might be interdependent with a transition making 
  the problem even more complicated.
  
 
 You are very good at repeating this will never work.  You are

Please, do the math (literally, the relevant area is stochastics).

  People who want stability use stable.
  
 It is not true today.  What is true is people who are not running
 hardware less than 36 (or so) months old have the option of running

the rest of usual standard bitching about woody-out-of-date-ness deleted

 People are not able to choose by their desired comfort level of
 stability.  If anything, my proposal might allow people to choose which
 version they want to run based on their desired level of stability -
 instead of what will run on their hardware.

And I still cannot see the innovation in your idea. It is basically a
second testing with stronger conditions - and the current one has
already failed with respect to the original requirements.

Regards,
Eduard.
-- 
stockholm Overfiend: why dont you flame him? you are good at that.
Overfiend I have too much else to do.





Re: Temporal Release Strategy

2005-04-22 Thread Patrick Ouellette
On Fri, Apr 22, 2005 at 07:16:30PM +0200, Adrian Bunk wrote:
 
 The problem is that for many transitions, slowly means never, since 
 the criteria you set are unlikely to be fulfilled for all parts of such 
 a transition at any time in the future.
 
 And the more time passes, it becomes more and more complicated since 
 additional transitions might be interdependent with a transition making 
 the problem even more complicated.
 

You are very good at repeating "this will never work".  You are
essentially saying it is impossible for a package to have no RC bugs,
and that those bugs are never going to be fixed fast enough to progress
through the quality control system I proposed.  I have a bit more faith
in my fellow Debian Developers than that.

I admit that the candidate phase will change more slowly than testing -
it is supposed to.  The stable (or whatever it is called - maybe
production) section will change even more slowly.  This is by design.
 
  I am proposing a system that removes some of the arbitrary nature of
  what we call a stable package.  I'm proposing that we define QUALITY
  CONTROL standards that ALL packages adhere to so that when someone says
  they recommend Debian's testing/candidate/stable release, they can point to 
  a
  testing system that allows the person to select which branch they use
  based upon well-known, published criteria for the stability of that
  particular branch.  The user controls the amount of risk they are
  willing to have in their system.
 
 
 That's already true today.
 
 People who like the latest software can choose between unstable and 
  testing, with testing usually having somewhat fewer known bugs.
 
 People who want stability use stable.
 
It is not true today.  What is true is that only people who are not
running hardware less than 36 (or so) months old have the option of
running stable (the kernel shipped with stable simply does not have
drivers for newer hardware).  This has been a perpetual problem.

People who need a stable distribution should not be forced to use
testing or unstable because they have hardware that is only 18 months
old, especially when you consider the pace of change in computer
hardware manufacturing.

The reality today is people who have older hardware can choose to run
Debian stable.  People with newer but by no means cutting edge hardware
do not have the option of installing stable.  They can choose testing or
unstable.

People who want security updates from the Debian security team must run
stable.  If you want security fixes and have newer hardware, you must
run unstable (and hope the maintainer uploads a fixed version quickly)
or patch the testing packages yourself.

People are not able to choose by their desired comfort level of
stability.  If anything, my proposal might allow people to choose which
version they want to run based on their desired level of stability -
instead of what will run on their hardware.

 
  Testing, candidate and stable should change progressively slower.  That
  is the entire point.
 
 
  As I am trying to explain, the speed of changes to stable will soon 
 become zero.

The speed of changes to stable is currently zero.  Debian does not have
to do anything to maintain that.  My proposal would at the very least
change that from zero to glacially slow, with the option to pick a
version that changes slowly, quickly, or continuously.
 
 
 If you believe your approach would work, please try the following:
 
 Take stable from today, and make this your candidate.
 Take testing from today.


Actually, I am planning on working on that this weekend.  I was not
going to start with the current stable, but with the current testing.  I
will be building a candidate list by using my proposed rules (0 RC bugs,
3 months or more in testing).

I will build a new stable from the candidate list with those packages
that have been in testing 6 or more months with 0 RC bugs.

It will be interesting to see how many required, base, standard, and
optional packages meet the standard I propose.
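
For what it is worth, the selection itself is tiny once the data exists -
here is the kind of filter I have in mind (only a sketch: the input file
name and its "package days-in-testing rc-bugs" format are made up, and
collecting that age data is the real work):

    def read_records(path):
        for line in open(path):
            pkg, days, rc = line.split()
            yield pkg, int(days), int(rc)

    def select(records, min_days):
        return sorted(p for p, days, rc in records
                      if days >= min_days and rc == 0)

    records = list(read_records("testing_ages.txt"))
    candidate = select(records, 90)    # 3+ months in testing, 0 RC bugs
    stable = select(records, 180)      # 6+ months in testing, 0 RC bugs
    print("candidate: %d packages, stable: %d packages"
          % (len(candidate), len(stable)))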

 Create a complete list of all packages that have to go from testing into 
 your candidate _at the same time_ for getting e.g. the tiff transition 
 into your candidate.
 
 If after this you do still believe your approach would work, please send 
 the complete list of packages you think would be involved in this one 
 transition (to let me check whether you missed some - there are many more 
 than a hundred), and explain at which point in the future you expect 
 every single package in the list to fulfill your criteria at the same 
 time.
 

I will publish the results on my people.Debian.org page at
http://people.debian.org/~pouelle/temporal_release.html

Look for that URL to be updated by 13:00 25-APR-2005 UTC

I will not be able to explain at which time I expect a particular
package to meet the standards since I don't maintain each and every
package in Debian.  Debian always releases "when it's ready" and I don't
expect that to change.


Pat

-- 

Patrick 

Re: Temporal Release Strategy

2005-04-22 Thread Patrick Ouellette
On Wed, Apr 20, 2005 at 04:56:32AM +0200, Adrian Bunk wrote:
 The rules and goals of testing are clear.
 
 The more interesting points are the problems of testing that several 
 years of using it have shown.
 
  If package FOO has a RC bug, then everything that depends on FOO will be
  stuck AT WHATEVER POINT IT IS IN THE PROCESS until FOO is fixed.  If
  fixing FOO breaks BAR, then they all wait again until BAR is fixed.  Use
  of experimental to work through some of these issues would help.
  I'm not saying it won't take manual coordination to handle complex
  changes to the system.  I'm not saying it will make anyone's life
  easier.  What my proposal will do is provide the ability to decide when
  package $PACKAGE makes it into stable; we will call that an official
  release and give it a number.  Alternatively, you could declare a Debian
  release every $INTERVAL.  What is in stable should have been well
  tested, and supportable.  Stable no longer is a static concept, but a
  slowly evolving thing.  If you cannot wrap your mind around accepting
  a stable that evolves, we could snapshot stable at release date and make
  a separate archive (really a Packages.gz and related files as long as
  the version of the package in the release exists in the package pool).
 
 You completely miss my point:
 
 There are several transitions every month, and a big transition can 
 involve several hundred packages.
 
 Your proposal requires that _every single_ package that is part of a 
 transition has to be both ready and in testing for over 3 months before 
 it can enter your proposed candidate.
 
 If _one_ of the packages that is part of a transition is updated in 
 testing during this time, the 3 months start again. For bigger 
 transitions, it's therefore practically impossible that they will be 
 able to enter your candidate.

I don't believe I missed your point; you just don't seem to be able to
grasp the fact that I intend candidate to change slowly. 

Yes, I am proposing that every package involved in a transition be of
adequate quality to be promoted to candidate.  The purpose of the entire
release system is to ensure the quality of the Debian distribution.
Debian releases when it's ready because Debian demands a certain
minimum level of quality (currently defined as an arbitrary number of RC
bugs in packages of variable importance in the distribution as seen by
the release manager).  I'm proposing a system that allows "when it's
ready" to be defined and automated.  Our current release system places
an enormous burden on the release manager.

 
 Please try to understand the limitations of testing before proposing 
 something even stricter.
 

I understand the limitations of testing.  In fact, I am depending on the
limitations of the testing rules to ensure that candidate is of adequate
quality and changes slowly enough to be used on desktop workstations and
that stable is adequate for servers.

I am proposing a system that removes some of the arbitrary nature of
what we call a stable package.  I'm proposing that we define QUALITY
CONTROL standards that ALL packages adhere to so that when someone says
they recommend Debian's testing/candidate/stable release, they can point to a
testing system that allows the person to select which branch they use
based upon well-known, published criteria for the stability of that
particular branch.  The user controls the amount of risk they are
willing to have in their system.

Testing, candidate and stable should change progressively slower.  That
is the entire point.

Pat

-- 

Patrick Ouellette
[EMAIL PROTECTED]
[EMAIL PROTECTED]
Amateur Radio: KB8PYM 





Re: Temporal Release Strategy

2005-04-22 Thread Patrick Ouellette
On Thu, Apr 21, 2005 at 01:04:34AM +0200, Adrian Bunk wrote:
 Let me try to explain it:
 
 
 The debian stable == obsolete is a release management problem of 
 Debian. One release every year and it would be suitable for most 
 purposes.


This is the problem.  Debian has NEVER been able to have a release every
year.  Most server administrators I know would prefer a release cycle
longer than 12 months, while most desktop users would prefer around 12-24
months.

The issue has always been one of how many RC bugs are acceptable in the
release and this has always been at the discretion of the release
manager.

 
 You say you've deployed Debian sarge and sid in server environments 
 (even sarge, although months old security fixes might be missing???).
 
 Let me ask some questions:
 - How many thousand people can't continue working if the server isn't
   available?
For comparative purposes, I have worked as a systems/network admin where
the number has been as small as 50 and as large as 30,000.

 - How many million dollar does the customer lose every day the server is
   not available?

We measured it in millions of dollars per hour, not day.

 - How many days without this server does it take until the company is
   bankrupt?

We never got to that point, because it was simply not an option.

 
 
 If the mail server of a small company isn't running for a few hours it's 
 not a problem - but there are also other environments.
 

Since you seem to be trolling, I'll feed the troll:  If that small
company relies on the email server to take orders from customers, that
few-hour outage could translate into a large amount of money.  If that
small company is not financially sound, that few-hour outage may be the
cause of that small business failing.  Large organizations are much
better equipped to weather a temporary outage (larger cash reserves,
ability to implement backup systems, etc).

 
 Regarding things broken in woody:
 
 In many environments, the important number is not the total number of 
 bugs but the number of regressions. Doing intensive tests once when you 
 install/upgrade the machine is acceptable, but requiring this every 
 month because it's required for the security updates that bring new 
 upstream releases is not acceptable.
 

This is already the norm if you take your system's stability seriously.
If you really value stability, you will test each and every
application on the system each time a change is made.  You would
be examining each security patch to make sure you understood what was
happening and that it was safe before you installed it.

 
  Look at the third use case I explained above. For these users of Debian, 
  long-living releases where the _only_ changes are security fixes are 
  _very_ important.
  
  Again, I don't think you ever built a commercial product around Linux 
  based on your statements here. No offence if you have, maybe it's just 
  corporate culture differences between the EU and US?
 
 
 There are reasons why companies pay several thousand dollars licence 
 fees for every computer they run the enterprise version of some 
 distribution on. E.g. RedHat supports each version of their enterprise 
 edition for seven years. A few thousand dollars are _nothing_ compared 
 to the support costs and man months that have to be put into setting up 
 and testing the system.
 
So it should be no problem for those companies who choose to run Debian
to forward a small donation to Debian for all the thousands they save.
Or maybe they should allow their staff to spend several hours a week
getting paid to contribute to Debian.

My point is Debian is NOT a corporate product.  If it is found to be
useful by corporations that's great for them.  If the corporations want
to run Debian, there are companies that offer similar support for Debian
that RedHat and Novell offer for their respective distros.

Since Debian is not a corporate product, Debian is free to investigate
and try different strategies without worrying about the monetary impact
of those changes in the same way a corporate distribution has to.  We
can innovate because it makes sense, not because it is good for the
bottom line.

 
 Debian stable is ancient - but that's something you have to ask the 
 Debian release management about. If the officially announced release 
 date for sarge is now missed by more than one and a half years this is 
 the issue where investigation should take place.
 

Which is the issue I was attempting to suggest a possible solution to.

 Regarding sarge:
 
 I do personally know people who had serious mail loss due to #220983. At 
 the time I reported this bug, it was present in sarge. This problem 
 couldn't have happened in a Debian stable (because it would have been 
 discovered before the release would have been declared stable). This 

This is the biggest delusion I have ever heard.  Any piece of software
can have a critical and undiscovered bug.  Just because it was not
discovered before someone 

Re: Temporal Release Strategy

2005-04-22 Thread Adrian Bunk
On Fri, Apr 22, 2005 at 12:02:39PM -0400, Patrick Ouellette wrote:
 On Thu, Apr 21, 2005 at 01:04:34AM +0200, Adrian Bunk wrote:
  Let me try to explain it:
  
  
  The debian stable == obsolete is a release management problem of 
  Debian. One release every year and it would be suitable for most 
  purposes.
 
 This is the problem.  Debian has NEVER been able to have a release every
 year.  Most server administrators I know would prefer a release cycle
 longer than 12 months, most desktop users would prefer around 12-24
 months.


But this problem is not solved by your proposal (and please read my 
other email on why your proposal won't work).

Your release management has announced that the testing release process 
was able to achieve this if they drop two thirds of the Debian 
architectures.

I'd say the pre-testing release process was able to achieve this with a dozen 
architectures.


 The issue has always been one of how many RC bugs are acceptable in the
 release and this has always been at the discretion of the release
 manager.


This number has never been higher than zero [1].


  You say you've deployed Debian sarge and sid in server environments 
  (even sarge, although months old security fixes might be missing???).
  
  Let me ask some questions:
  - How many thousand people can't continue working if the server isn't
available?
 For comparative purposes, I have worked as systems/network/admin where
 the number has been as small as 50 and as large as 30,000.
 
  - How many million dollar does the customer lose every day the server is
not available?
 
 We measured it in millions of dollars per hour, not day.
 
  - How many days without this server does it take until the company is
bankrupt?
 
 We never got to that point, because it was simply not an option.


And critical machines for such a company were running Debian testing 
or unstable???


  If the mail server of a small company isn't running for a few hours it's 
  not a problem - but there are also other environments.
 
 Since you seem to be trolling, I'll feed the troll:  If that small
 company relies on the email server to take orders from  customers, that
 few hour outage could translate into a large amount of money.  If that
 small company is not financially sound, that few hour outage may be the
 cause of that small business failing.  Large organizations are much


I am not trolling.

All I wanted to say is that there are not that many computer-dependent 
small companies that can live with such outages.


 better equipped to weather a temporary outage (larger cash reserves,
 ability to implement backup  systems, etc).


An interesting problem with software bugs is that they affect backup 
systems as well.


...
   Look at the third use case I explained above. For these users of Debian, 
   long-living releases where the _only_ changes are security fixes are 
   _very_ important.
   
   Again, I don't think you ever built a commercial product around Linux 
   based on your statements here. No offence if you have, maybe it's just 
   corporate culture differences between the EU and US?
  
  
  There are reasons why companies pay several thousand dollars licence 
  fees for every computer they run the enterprise version of some 
  distribution on. E.g. RedHat supports each version of their enterprise 
  edition for seven years. A few thousand dollars are _nothing_ compared 
  to the support costs and man months that have to be put into setting up 
  and testing the system.
 
 So it should be no problem for those companies who choose to run Debian
 to forward a small donation to Debian for all the thousands they save.
 Or maybe they should allow their staff to spend several hours a week
 getting paid to contribute to Debian.


AFAIK neither money nor human resources are a problem for Debian.

SPI already has more money than plans for how to reasonably spend it, and 
if manpower was a problem in a project with nearly a thousand official and 
trusted developers, this would be an organizational problem, but not a 
problem you could solve by adding more people to the project - people 
discovered no less than thirty years ago that adding people to a 
project often _increases_ the time until the goal is reached.


 My point is Debian is NOT a corporate product.  If it is found to be
 useful by corporations that's great for them.  If the corporations want
 to run Debian, there are companies that offer similar support for Debian
 that RedHat and Novell offer for their respective distros.
 
 Since Debian is not a corporate product, Debian is free to investigate
 and try different strategies without worrying about the monetary impact
 of those changes in the same way a corporate distribution has to.  We
 can innovate because it makes sense, not because it is good for the
 bottom line.


Debian plans to drop two thirds of its architectures from its releases, 
and Debian could also completely drop stable.

No one forces Debian to continue stable releases, but you 

Re: Temporal Release Strategy

2005-04-22 Thread Patrick Ouellette
On Mon, Apr 18, 2005 at 04:24:34PM -0500, Adam M wrote:
 A similar thing is already here in http://snapshot.debian.net/

Similar only in that they have daily snapshots.  Vastly dissimilar in
that what is provided is the complete archive, bugs and all.

I'm not saying we call each day a release, but we allow stable to be 
updated from candidate daily and call it a release when a particular
event happens.  That event is to be defined outside the process of 
the archive evolution.

 
 You cannot do this with the archive. The current archive size is
 already too big for most mirrors to handle.
 

I don't believe this would add a significant amount of material to the
archive.  If the software in candidate is stable, the only time a
package is different between candidate and testing is during the
three-month test period.  Archive size needs to be addressed, and will most
likely continue to be a problem for some time to come.   

  You can still have this environment.  As long as your system looks at
  the Packages file from the release (and the security updates Packages
  file).
 
 see above link :)
 
  Testing does not remedy this problem.  If testing was virtually always
  production quality then there would be no need for the release manager
  to go through an elaborate freeze  bug fix cycle to get things in shape
  for a release.
 
 All you are proposing is another testing-like stage. Bugs would
 propagate there regardless. Bugs are part of stable as well.


Yes, bugs are part of each and every package.  The trick is in knowing
what bugs are present so you can deal with them.  The longer you test,
the greater the chance that a critical bug which requires an unlikely
sequence of events to trigger will be discovered.

   We should not destroy the notion of stable to get up-to-date packages.
  
  I'm not trying to destroy the notion of stable, I have a different
  definition of stable.  My definition of stable is software that does
  what it is designed to do without bugs, in the manner in which the
  designer and programmer intended.  I'm also trying to show that the
 
Then your stable never existed. All software has bugs, be it Linux- or
Windows-based.  Software of any complexity without any bugs does not
exist. For example, look at the number of bugs in emacs; yet I would
consider the software mature and relatively bug-free.
 

I would argue that it depends on which particular version of emacs you
are using as to whether it should be called mature and relatively bug-free.
If you are pulling the latest CVS snapshot I would not call that mature
or bug free.  If you are using a version that has been released for some
time, then it is possible to consider it mature - the bug free part is
another story.  Mature != bug free.  Stable != bug free.  

"Mature" can mean "feature rich" or "old", and "stable" can mean
"unchanging" or "of sufficient quality for the intended purpose".

  traditional concept of a release in Debian is outdated.  I will even go
  so far as to say the reason Debian has had exponentially longer release
  cycles is that the traditional concept of a release is flawed for a
  project the size and scope of Debian.  We need to adjust our thinking
  outside the traditional definitions.
 
 Why? Why is there RHEL 2.0, 3.0.. Why not just RHEL 2005-01-01,
 2005-01-02, etc..? The releases are there to provide interface
 stability. Everyone does this. What you are proposing is the time
 based snapshots which are already available on
 http://snapshot.debian.net/

I am proposing a progressive update to stable, so we can declare a
collection of packages (with their associated version numbers) a release
by a well-defined rule.  Once a release is declared, you have what you
termed interface stability.  I am only proposing time-based snapshots
in that you could, on any given day, declare what was in the stable
Packages file to be a release, and it could contain different packages
than the previous release.
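
Mechanically, declaring such a release is not much more than freezing the
index - something along these lines (the paths, naming scheme and the
md5-only Release file are illustrative, not a worked-out design):

    import hashlib, os, shutil, time

    src = "dists/stable/main/binary-i386/Packages.gz"
    stamp = time.strftime("%Y%m%d")
    dst_dir = "dists/release-%s/main/binary-i386" % stamp

    os.makedirs(dst_dir)                 # snapshot the index, not the pool
    shutil.copy2(src, dst_dir)
    digest = hashlib.md5(open(src, "rb").read()).hexdigest()
    open("dists/release-%s/Release.md5" % stamp, "w").write(
        "%s  Packages.gz\n" % digest)
    print("declared release-%s (Packages.gz md5 %s)" % (stamp, digest))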

 
 Now, if you want to support snapshot of Debian with 36 month security,
 well, be my guest :) In the last 36-months, there were about 30
 uploads of Apache to unstable. Now, if only 15 such versions
 propagated to stable snapshots, then you find a remote hole, and
 suddenly you have to backport a security fix for 15 versions of
 Apache!

If there were 30 uploads of apache over 36 months, there would not have
been any updates to the candidate package, as none of the updates were
old enough.  This is the point of the 3-month period to discover bugs.

If a security fix is needed, we are only fixing the last few versions
that made it to stable - given your scenario, at most two versions.
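
The arithmetic behind that claim, using the figures from your example
(the even spacing of uploads is my assumption):

    uploads = 30
    months = 36
    gap_days = months * 30.0 / uploads      # roughly 36 days between uploads
    print("average gap between apache uploads: %.0f days" % gap_days)
    print("ever reaches candidate (needs 90 quiet days): %s" % (gap_days >= 90))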

 
 Also, try providing an efficient stable security build daemons! The
 chroots would have to be rebuilt for each package.
 

Again, this need is addressed by the requirement that the package meet
the rules for promotion.  The challenge would be little different than
it is today.

  I think this proposal could actually enhance the stability of 

Re: Temporal Release Strategy

2005-04-22 Thread Adrian Bunk
On Fri, Apr 22, 2005 at 02:54:38PM -0400, Patrick Ouellette wrote:
 On Fri, Apr 22, 2005 at 07:16:30PM +0200, Adrian Bunk wrote:
  
  The problem is that for many transitions, slowly means never, since 
  the criteria you set are unlikely to be fulfilled for all parts of such 
  a transition at any time in the future.
  
  And the more time passes, it becomes more and more complicated since 
  additional transitions might be interdependent with a transition making 
  the problem even more complicated.
  
 
 You are very good at repeating this will never work.  You are
 essentially saying it is impossible for a package to have no RC bugs,
 and that those bugs are never going to be fixed fast enough to progress
 through the quality control system I proposed.  I have a bit more faith
 in my fellow Debian Developers than that.
 
 I admit that the candidate phase will change more slowly than testing -
 it is supposed to.  The stable (or whatever it is called - maybe
 production) section will change even more slowly.  This is by design.


Show me how my tiff transition example will work in your proposal, and 
you can prove me wrong...


...
   Testing, candidate and stable should change progressively slower.  That
   is the entire point.
  
  
   As I am trying to explain, the speed of changes to stable will soon 
  become zero.
 
 The speed of changes to stable is currently zero.  Debian does not have
 to do anything to maintain that.  My proposal would at the very least
 change that from zero to glacially slow, with the option to pick a
 version that changes slow, fast, or continuously.
  
  
  If you believe your approach would work, please try the following:
  
  Take stable from today, and make this your candidate.
  Take testing from today.
 
 
 Actually, I am planning on working on that this weekend.  I was not
 going to start with the current stable, but with the current testing.  I
 will be building a candidate list by using my proposed rules (0 RC bugs,
 3 months or more in testing).
 
 I will build a new stable from the candidate list with those packages
 that have been in testing 6 or more months with 0 RC bugs.


Where do you get the information about how long a package has been in testing?
Do you have 6 months of update_output, or is there a source I do not 
know about?


 It will be interesting to see how many required, base, standard, and
 optional packages meet the standard I propose.
...

Since even glibc in testing will not be in your candidate list, I can 
predict that your result set will be very small since you have to ensure 
that all dependencies and build dependencies are fulfillable...


 Pat

cu
Adrian

-- 

   Is there not promise of rain? Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   Only a promise, Lao Er said.
   Pearl S. Buck - Dragon Seed





Re: Temporal Release Strategy

2005-04-22 Thread luna
On Fri, 22 Apr 2005 13:58:31 -0400, Patrick Ouellette wrote:
 On Mon, Apr 18, 2005 at 04:24:34PM -0500, Adam M wrote:
 In many ways, current testing is your stable. Extending the testing
 period from testing to your proposed candidate and then stable would
 do nothing about normal bugs. RC bugs are usually found quite quickly
 by people using unstable.
 If RC bugs are found so quickly in unstable, why has there been no
 release in the last 3 or so years?  Testing is normally quite usable.
 That is part of the reason I believe this type of approach to releases
 would work.
Perhaps because RC bugs are not the only problem to resolve in order to 
release. Indeed, the security build infrastructure seems to be a bigger problem.



Re: Temporal Release Strategy

2005-04-20 Thread Jeff Carr
Adam M wrote:
Why? Why is there RHEL 2.0, 3.0.. Why not just RHEL 2005-01-01,
2005-01-02, etc..? 
Because redhat makes money selling releases.
 The releases are there to provide interface stability. Everyone does 
this.

Everyone being other distributions? I disagree. How many Fortune 500 
customers have you deployed debian for? Interface stability? Anyone 
that cares looks at the packages that matter, specifically if it's being 
deployed commercially.

It's much better for acceptance that you don't have to have 
conversations with managers because someone explains to them that you 
should be using redhat because you are using Debian unstable or 
Debian testing and it's *dangerous* and *unstable*. Get rid of these 
stupid symlinks; debian sid's been superior to fedora for years.

Now, if you want to support snapshot of Debian with 36 month security,
well, be my guest :) In the last 36-months, there were about 30
uploads of Apache to unstable. 
Excellent.
 Now, if only 15 such versions
propagated to stable snapshots, then you find a remote hole, and
suddenly you have to backport a security fix for 15 versions of
Apache!
What?
Isn't the process:
1) make a patch
2) give it to the apache developers
3) new packaged apache versions have the patch
4) patch makes it upstream
5) patch no longer needed in debian package
Also, try providing efficient stable security build daemons! The
chroots would have to be rebuilt for each package.
? I guess I don't understand enough about how the build process works 
for the packages in debian but that sounds funny to me. Or I just don't 
understand what you mean.

I think this proposal could actually enhance the stability of Debian
(where stability is defined as lack of bugs, not software that never
changes except for security updates), as well as further enhance the
reputation Debian maintains in the community.
I totally agree with this - a temporal release strategy.
In many ways, current testing is your stable.
No kidding, so what the heck is the point of having a stable symlink to 
woody. The stable, testing and unstable symlinks should be removed. They 
are just being used as FUD by people against debian.

Extending the testing
period from testing to your proposed candidate and then stable would
do nothing about normal bugs. RC bugs are usually found quite quickly
by people using unstable.
Why not let people choose what they want to use - woody, sarge or sid - 
and never change the names again. I think lots of people are happy with 
how things work now. No need to ever do a release again. Just remove the 
old/arcane symlinks. Almost everyone I know uses sid; I don't think 
anyone is going to switch to sarge once sid is out.

Jeff


Re: Temporal Release Strategy

2005-04-20 Thread Adrian Bunk
On Wed, Apr 20, 2005 at 02:06:12PM -0700, Jeff Carr wrote:
 Adam M wrote:
 
 Why? Why is there RHEL 2.0, 3.0.. Why not just RHEL 2005-01-01,
 2005-01-02, etc..? 
 
 Because redhat makes money selling releases.
 
  The releases are there to provide interface stability. Everyone does 
 this.
 
 Everyone being other distributions? I disagree. How many Fortune 500 
  customers have you deployed debian for? Interface stability? Anyone 
 that cares looks at packages that matter specifically if it's being 
 deployed commercially.
 
 It's much better for acceptance that you don't have to have 
 conversations with managers because someone explains to them that you 
 should be using redhat because you are using Debian unstable or 
 Debian testing and it's *dangerous* and *unstable*. Get rid of these 
 stupid symlinks; debian sid's been superior to fedora for years.


There are at least three different comparisons:


Debian sid is comparable to e.g. RedHat Fedora or Gentoo (which of these 
three is best is a different discussion).

Debian sid is for experienced computer users who always want the latest 
software and who can live with a bug here or there.


Debian stable is comparable to personal editions of other distributions 
like e.g. SuSE Professional.

These distributions are for users with little experience who simply want a 
running system. Debian is a bit behind in terms of being up-to-date and 
user-friendliness, but it is far superior in its stability.


Debian stable is comparable to the enterprise products of e.g. RedHat or 
SuSE.

These distributions are usually installed on servers that are installed 
and intensively tested once. Security fixes are a must but mustn't cause 
any breakages. Updates to new upstream versions which might break 
something 


Note that you can't cover the last use case without a long-living and 
non-changing stable.


 Now, if you want to support snapshot of Debian with 36 month security,
 well, be my guest :) In the last 36-months, there were about 30
 uploads of Apache to unstable. 
 
 Excellent.
 
  Now, if only 15 such versions
 propagated to stable snapshots, then you find a remote hole, and
 suddenly you have to backport a security fix for 15 versions of
 Apache!
 
 What?
 
 Isn't the process:
 
 1) make a patch
 2) give it to the apache developers
 3) new packaged apache versions have the patch
 4) patch makes it upstream
 5) patch no longer needed in debian package


Look at the third use case I explained above. For these users of Debian, 
long-living releases where the _only_ changes are security fixes are 
_very_ important.


...
 In many ways, current testing is your stable.
 
 No kidding, so what the heck is the point of having a stable symlink to 
 woody. The stable, testing and unstable symlinks should be removed. They 
 are just being used as FUD by people against debian.


They are not (see above).



 Extending the testing period from testing to your proposed candidate 
  and then stable would do nothing about normal bugs. RC bugs are 
 usually found quite quickly by people using unstable.
 
 Why not let people choose what they want to use woody sarge or sid 
 and never change the names again. I think lots of people are happy with 
 how things work now. No need to ever do a release again. Just remove the 
 old/arcane symlinks. Almost everyone I know uses sid; I don't think 
 anyone is going to switch to sarge once sid is out.


See above.


 Jeff

cu
Adrian

-- 

   Is there not promise of rain? Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   Only a promise, Lao Er said.
   Pearl S. Buck - Dragon Seed





Re: Temporal Release Strategy

2005-04-20 Thread Jeff Carr
Adrian Bunk wrote:
There are at least three different comparisons:
Debian sid is comparable to e.g. RedHat Fedora or Gentoo (which of these 
three is best is a different discussion).

Debian sid is for experienced computer users who always want the latest 
software and who can live with a bug here or there.
I think you nailed that perfectly. This should be the text on the debian 
website that describes the release and points people to a CD image to 
install it.

Debian stable is comparable to personal editions of other distributions 
like e.g. SuSE Professional.

These distributions are for users with little experience who simply want a 
running system. Debian is a bit behind in terms of being up-to-date and 
user-friendliness, but it is far superior in its stability.
OK, I like this also, but s/stable/sarge/ and that would be perfect. No 
reason to alias stable to sarge. Just use the name and be done with it.

Debian stable is comparable to the enterprise products of e.g. RedHat or 
SuSE.

These distributions are usually installed on servers that are installed 
and intensively tested once. Security fixes are a must but mustn't cause 
any breakages. Updates to new upstream versions which might break 
something 
Well, that is wishful thinking, but I've deployed debian sid against RH 
enterprise and commercial dists. Sometimes sid, sometimes sarge. It 
really depends on the customer and the competence of their staff.

In any case, you are thinking wishfully here and I'm not sure you have 
deployed debian to large clients. The primary problem is the poor 
impression that:

woody == stable == old
sarge/sid == testing/unstable == broken == pain == my servers crash
Note that you can't cover the last use case without a long-living and 
non-changing stable.
I think the debian community would be better served if the word stable 
were never again tied to a particular release.

How can you really say woody is any more stable than sid anyway? There 
are things so broken in the old versions of packages in woody that they 
cannot be used anymore in a modern environment. Sure, it might be 
stable in the sense that it doesn't crash, but useless vs stable is 
undesirable. Having woody == stable is giving the false impression to 
people that don't know better that:

debian stable == old == obsolete == something is wrong with this picture
It just makes it hard to build confidence with decision makers that 
sid/sarge is safe to use over RHEL.

Look at the third use case I explained above. For these users of Debian, 
long-living releases where the _only_ changes are security fixes are 
_very_ important.
Again, I don't think you ever built a commercial product around Linux 
based on your statements here. No offence if you have, maybe it's just 
corporate culture differences between the EU and US?

No kidding, so what the heck is the point of having a stable symlink to 
woody. The stable, testing and unstable symlinks should be removed. They 
are just being used as FUD by people against debian.
They are not (see above).
I think I explained poorly what I meant by FUD. What I meant was that 
people who want other distributions to be used spread the FUD that sarge 
is dangerous and that the only stable version of debian is ancient and 
too old to use.

Enjoy,
Jeff


Re: Temporal Release Strategy

2005-04-20 Thread Adam M
 Isn't the process:
 
 1) make a patch
 2) give it to the apache developers
 3) new packaged apache versions have the patch
 4) patch makes it upstream
 5) patch no longer needed in debian package

You know, there are security updates for stable releases. You have to
patch those. If there are 15 versions of Apache in various stable
releases, that means 15 packages to apply patches to. Well, let's just
say it is less than realistic.

  Also, try providing an efficient stable security build daemons! The
  chroots would have to be rebuilt for each package.
 
 ? I guess I don't understand enough about how the build process works
 for the packages in debian but that sounds funny to me. Or I just don't
 understand what you mean.

To build security patches, you need the same libraries, compilers,
etc... for the release so the built package has the same ABI.
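
A toy illustration of why that multiplies the work (the version numbers
and snapshot names are invented): every snapshot that shipped a distinct
apache needs its own matching build environment for the fixed package.

    snapshots = {
        "release-20050101": "2.0.52-1",
        "release-20050201": "2.0.52-3",
        "release-20050301": "2.0.53-5",
    }
    for version in sorted(set(snapshots.values())):
        releases = sorted(r for r, v in snapshots.items() if v == version)
        print("rebuild apache %s in a chroot matching: %s"
              % (version, ", ".join(releases)))
    print("%d separately patched builds needed"
          % len(set(snapshots.values())))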

  In many ways, current testing is your stable.
 
 No kidding, so what the heck is the point of having a stable symlink to
 woody. The stable, testing and unstable symlinks should be removed. They
 are just being used as FUD by people against debian.

Yes, it is mostly FUD. You can change the symlinks to "stale",
"staging", "bleeding-edge" or whatever. For more information on what is
in Debian, and especially its FTP structures, see

http://www.debian.org/doc/manuals/reference/ch-system.en.html#s-dists

Also, I think you replied to this without even reading the thread. I
said his definition of stable, not the current stable.

  Extending the testing
  period from testing to your proposed candidate and then stable would
   do nothing about normal bugs. RC bugs are usually found quite quickly
  by people using unstable.
 
 Why not let people choose what they want to use woody sarge or sid
 and never change the names again. I think lots of people are happy with
 how things work now. No need to ever do a release again. Just remove the
 old/arcane symlinks. Almost everyone I know uses sid; I don't think
 anyone is going to switch to sarge once sid is out.

These names will never change. You can still use Slink or Hamm if you want.

Oh, and Sid will never release because it is always unstable. See
above link for more details.

Also, your last statement shows your lack of understanding of the
release process. Instigating needless flamewars, like you just did, is
probably the main reason why Ubuntu was created. So, if you are going
to rant about something you don't know about (and don't care about
since you use Sid, not Sarge), take a break and then think if it's
worth it.

- Adam



Re: Temporal Release Strategy

2005-04-20 Thread Adrian Bunk
On Wed, Apr 20, 2005 at 03:18:52PM -0700, Jeff Carr wrote:
 Adrian Bunk wrote:
...
 Debian stable is comparable to the enterprise products of e.g. RedHat or 
 SuSE.
 
 These distributions are usually installed on servers that are installed 
 and intensively tested once. Security fixes are a must but mustn't cause 
 any breakages. Updates to new upstream versions which might break 
 something 
 
 Well, that is wishful thinking, but I've deployed debian sid against RH 
 enterprise and commercial dists. Sometimes sid, sometimes sarge. It 
  really depends on the customer and the competence of their staff. 
 
 In any case, you are thinking wishfully here and I'm not sure you have 
 deployed debian to large clients. The primary problem is the poor 
 impression that:
 
 woody == stable == old
 sarge/sid == testing/unstable == broken == pain == my servers crash
 
 Note that you can't cover the last use case without a long-living and 
 non-changing stable.
 
 I think the debian community would be better served if never again the 
 words stable were tied to a particular release.
 
 How can you really say woody is any more stable than sid anyway? There 
 are things so broken in the old versions of packages in woody that they 
  cannot be used anymore in a modern environment. Sure, it might be 
 stable in the sense that it doesn't crash, but useless vs stable is 
 undesirable. Having woody == stable is giving the false impression to 
 people that don't know better that:
 
 debian stable == old == obsolete == something is wrong with this picture
 
 It just makes it hard to build confidence with decision makers that 
 sid/sarge is safe to use over RHEL.
...


Let me try to explain it:


The "debian stable == obsolete" perception is a release management problem 
of Debian. With one release every year, it would be suitable for most 
purposes.


You say you've deployed Debian sarge and sid in server environments 
(even sarge, although months old security fixes might be missing???).

Let me ask some questions:
- How many thousand people can't continue working if the server isn't
  available?
- How many million dollar does the customer lose every day the server is
  not available?
- How many days without this server does it take until the company is
  bankrupt?


If the mail server of a small company isn't running for a few hours it's 
not a problem - but there are also other environments.


Regarding things broken in woody:

In many environments, the important number is not the total number of 
bugs but the number of regressions. Doing intensive tests once when you 
install/upgrade the machine is acceptable, but requiring this every 
month because it's required for the security updates that bring new 
upstream releases is not acceptable.


 Look at the third use case I explained above. For these users of Debian, 
 long-living releases where the _only_ changes are security fixes are 
 _very_ important.
 
 Again, I don't think you ever built a commercial product around Linux 
 based on your statements here. No offence if you have, maybe it's just 
 corporate culture differences between the EU and US?


There are reasons why companies pay several thousand dollars licence 
fees for every computer they run the enterprise version of some 
distribution on. E.g. RedHat supports each version of their enterprise 
edition for seven years. A few thousand dollars are _nothing_ compared 
to the support costs and man months that have to be put into setting up 
and testing the system.


And I doubt these are only corporate culture differences between the EU 
and US:

How many days does it take in the US until a bank is bankrupt after a 
critical part of their computer infrastructure is broken?


 No kidding, so what the heck is the point of having a stable symlink to 
 woody. The stable, testing and unstable symlinks should be removed. They 
 are just being used as FUD by people against debian.
 
 They are not (see above).
 
 I think I explained poorly what I meant by FUD. What I meant was that 
 people that want other distributions to be used, use the FUD that sarge 
 is dangerous and the only stable version of debian is ancient and 
 too old to use.


I'd say neither is FUD - both are true.

Debian stable is ancient - but that's something you have to ask the 
Debian release management about. If the officially announced release 
date for sarge has now been missed by more than one and a half years, 
that is the issue where investigation should take place.

Regarding sarge:

I do personally know people who had serious mail loss due to #220983. At 
the time I reported this bug, it was present in sarge. This problem 
couldn't have happened in a Debian stable (because it would have been 
discovered before the release would have been declared stable). This 
kind of problem, which can occur any day in sarge, _is_ a dangerous 
problem.


 Enjoy,
 Jeff

cu
Adrian

-- 

   Is there not promise of rain? Ling Tan asked suddenly out
of the darkness. There had been 

Re: Temporal Release Strategy

2005-04-20 Thread Jeff Carr
Adam M wrote:
? I guess I don't understand enough about how the build process works
for the packages in debian but that sounds funny to me. Or I just don't
understand what you mean.

To build security patches, you need the same libraries, compilers,
etc... for the release so the built package has the same ABI.
Surely. I just thought there could be only one version of a package in 
the Packages.gz file. I didn't think each older package that might be in 
main/a/apache/ would be rebuilt with the environment that it was 
originally built in. If I understand you correctly and that is what 
happens, then I see that would be computationally intensive.

Yes, it is mostly FUD. You can change the symlinks to ...
Well, I can't really change them; I was more just giving my point of 
view as a happy debian user.

Also, I think you replied to this without even reading the thread. I
said his definition of stable, not current stable.
Sorry you are right, I didn't notice this distinction.
These names will never change. You can still use Slink or Hamm if you want.
Oh, and Sid will never release because it is always unstable. See
above link for more details.
Yes, I knew that, but I tend to forget. It's been years since the last 
release.

Also, your last statement shows your lack of understanding of the
release process. Instigating needless flamewars, like you just did, is
Sorry; wasn't trying to do that. Just passing on results of working in a 
corporate environment and the kinds of complaints that have been used 
against debian deployment.

Enjoy,
Jeff


Re: Temporal Release Strategy

2005-04-20 Thread Miquel van Smoorenburg
In article [EMAIL PROTECTED],
Jeff Carr  [EMAIL PROTECTED] wrote:
Why not let people choose what they want to use woody sarge or sid 
and never change the names again. I think lots of people are happy with 
how things work now. No need to ever do a release again. Just remove the 
old/arcane symlinks. Almost everyone I know uses sid; I don't think 
anyone is going to switch to sarge once sid is out.

If almost everyone you know is a desktop user, then I can see your
point. But no-one sane running production server systems is going
to run sid.

Sid aka unstable on a production system means either updating
your production system every few days to keep up ("sorry customer,
we switched to php 6.8 with postgres 17, rewrite your apps and
fix your sql" or "sorry boss, we switched to php 6.8 with postgres
17, the forced rewrite of the production system means the plant will be
down for 3 weeks") or you don't upgrade unless needed for security
reasons and at that point you have the same problem but then
for 300 packages at the same time.

That's why you need a stable supported release. No surprises
but still security patches.

I also think that running debian unstable-only will mean debian
gets even less focused. Why update your packages, there's
not going to be a release ever anyway. If we're not at that
point already.

Mike.





Re: Temporal Release Strategy

2005-04-20 Thread Jeff Carr
Adrian Bunk wrote:
Let me ask some questions:
- How many thousand people can't continue working if the server isn't
  available?
- How many million dollars does the customer lose every day the server is
  not available?
- How many days without this server does it take until the company is
  bankrupt?
These are interesting questions, but not really applicable. I've never 
seen a corporate environment where an upstream or outside distribution 
is deployed without being tested internally first. I don't think it's 
something that should be taken into account in the release process. 
Companies have internal methods for deployment that double-check and 
verify a distribution before it is used.

There are reasons why companies pay several thousand dollars in licence 
fees for every computer they run the enterprise version of some 
distribution on. E.g. RedHat supports each version of their enterprise 
edition for seven years.
I didn't know they had pledged to do that. Interesting.
How many days does it take in the US until a bank is bankrupt after a 
critical part of their computer infrastructure is broken?
I don't know. Maybe we should run a test :)
Jeff


Re: Temporal Release Strategy

2005-04-20 Thread Russ Allbery
Adrian Bunk [EMAIL PROTECTED] writes:

 You say you've deployed Debian sarge and sid in server environments 
 (even sarge, although months old security fixes might be missing???).

Sure.  Frankly, sarge has better security support than we ever got from
Sun for commercial versions of Solaris.  Don't run the things that aren't
secure, pay attention to advisories, and be willing to grab something from
sid in the case of dire emergencies, and sarge provides a perfectly
acceptable security profile.  Servers generally expose very few things to
the network and one rarely cares about local exploits.

Now, Debian stable is far *better* on security, and in fact I would say
that Debian stable has better security support than any other operating
system I've ever seen.  I would *prefer* to have Debian stable's level of
security support for servers.  But if I have to have Apache 2.x or some
other package that just isn't easily available for stable, going with
sarge rather than backports is a reasonable decision and one that I'm
quite comfortable with.

Really, the worry about using sarge in production is not the security
support, it's the fact that things keep changing all the time and in ways
that may introduce bugs.  The stability and the lack of change in anything
other than security are the important bits for stable for me, and what I'm
currently really missing in an environment where I'm mostly running sarge
(mostly because we need Apache 2.x, partly because we also need a newer
OpenLDAP).

 Regarding sarge:

 I do personally know people who had serious mail loss due to #220983. At
 the time I reported this bug, it was present in sarge. This problem
 couldn't have happened in a Debian stable release (because it would have
 been discovered before the release was declared stable). These kinds
 of problems, which can occur any day in sarge, _are_ dangerous
 problems.

Yeah, this is more the thing that I'd worry about when running sarge on a
server.

-- 
Russ Allbery ([EMAIL PROTECTED]) http://www.eyrie.org/~eagle/





Re: Temporal Release Strategy

2005-04-20 Thread Jeff Carr
Miquel van Smoorenburg wrote:
If almost everyone you know is a desktop user, 
Most everyone I know is an engineer :)
then I can see your
point. But no-one sane running production server systems is going
to run sid.
Well, I'd say no-one sane is running an unqualified/untested 
distribution. It doesn't matter where you get it from.

Sid aka unstable on a production system means either updating
your production system every few days to keep up (sorry customer,
we switched to php 6.8 with postgres 17, rewrite your apps and
fix your sql or sorry boss, we switched to php 6.8 with postgres
17, the forced rewrite of the production system means the plant will be
down for 3 weeks)
Yes, all these would clearly be stupid things to do :)
I don't blindly deploy distributions to clients in any case, be they 
commercially released ones or snapshots of other ones (well, they are 
all really snapshots anyway).

you don't upgrade unless needed for security
reasons and at that point you have the same problem but then
for 300 packages at the same time.
Security is another thing altogether, normally assessed via other 
considerations and needs.

That's why you need a stable supported release. No surprises
but still security patches.
I wasn't trying to suggest that the releases should be unstable or 
insecure. But these terms are relative and come at a cost, don't they?

Maybe it's just better to have sarge, sid, etc. as named releases, with 
more detailed descriptions of the intent of each release, like the ones 
Adrian Bunk wrote so well a few emails back?

I also think that running debian unstable-only will mean debian
will get even less focused. 
Ok, I'm not knowledgeable enough to understand all the issues. I just 
wanted to send encouraging words and feedback to the developers. debian 
(sid specifically) has been, and I hope continues to be, spectacularly 
well done over the last 5 years. I really think it's the best distribution 
out there.

Why update your packages, there's not going to be a release ever anyway. 
 If we're not at that point already.
I'm not sure I fully understand what I sense is frustration with my 
comments/feedback/suggestions. I'm really just trying to compliment 
everyone involved, mention that sarge and sid are really grand, and pass 
back the one bad thing: new users get worried when they hear testing 
and unstable. Those labels are not really accurate; I can 
confidently say that sarge and sid have been no more untested or unstable 
than mandrake or fedora over the last 5 years.

I am merely suggesting that the bar has been raised so high, and the 
standards and expectations set at such a lofty level, that the general 
public might be better served by a more detailed explanation of the 
releases and their dangers. Again, the text Adrian wrote a few emails 
back is, I think, perfect, and might serve better than calling them simply 
stable, testing and unstable.

Warm regards,
Jeff


Re: Temporal Release Strategy

2005-04-20 Thread Adrian Bunk
On Wed, Apr 20, 2005 at 04:23:02PM -0700, Jeff Carr wrote:
 Adrian Bunk wrote:
 
 Let me ask some questions:
 - How many thousand people can't continue working if the server isn't
   available?
 - How many million dollars does the customer lose every day the server is
   not available?
 - How many days without this server does it take until the company is
   bankrupt?
 
 These are interesting questions, but not really applicable. I've never 
 seen a corporate environment where an upstream or outside distribution 
 is deployed without being tested internally first. I don't think it's 
 something that should be taken into account in the release process. 
 Companies have internal methods for deployment that double check and 
 verify a distribution before it is used.
...


Yes, such companies do test all changes. But being sure that it's _very_ 
unlikely that a security update breaks something makes life much easier.


And then there's the class of problems you could recently observe with
PHP 4.3.10:

PHP 4.3.10 fixed more than half a dozen known security problems, but it 
also contained a performance regression that made some scripts run slower 
by a factor of more than 50 (sic).

If your distribution gives you PHP 4.3.10 to fix the security problems 
and you use PHP4 on a busy server, you have a big problem in such a 
situation.


 Jeff


cu
Adrian

-- 

   Is there not promise of rain? Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   Only a promise, Lao Er said.
   Pearl S. Buck - Dragon Seed





Re: Temporal Release Strategy

2005-04-20 Thread Adam M
On 4/20/05, Jeff Carr [EMAIL PROTECTED] wrote:
 Adam M wrote:
 
 ? I guess I don't understand enough about how the build process works
 for the packages in debian but that sounds funny to me. Or I just don't
 understand what you mean.
 
 
  To build security patches, you need the same libraries, compilers,
  etc... for the release so the built package has the same ABI.
 
 Surely. I just thought there could be only one version of a package in
 the Packages.gz file. I didn't think each older package that might be in
  main/a/apache/ would be rebuilt with the environment that it was
 originally built in. If I understand you correctly and that is what
 happens, then I see that would be computing intensive.

Well, this is one problem with having automatic releases like this.
There are much bigger problems though, like mirror space. The
Vancouver proposal is trying to address this. If you have a package at
version A built for arch A', and the maintainer then uploads version B
but arch A' can't keep up building it, the mirrors must carry both
versions A and B of the package source. The Vancouver proposal was
trying to move some less popular and/or obsolete arches from the main
mirror network to a voluntary one (i.e. not trying to kill the ports or
anything).

As you can see, having many overlapping releases like the Temporal
Release Strategy would kill the current mirror network.

  Yes, it is mostly FUD. You can change the symlinks to ...
 
 Well, I can't really change them; I was more just giving my point of
 view as a happy debian user.
...
 Sorry; wasn't trying to do that. Just passing on results of working in a
  corporate environment and the kinds of complaints that have been used
 against debian deployment.

I would suggest that instead of saying you are installing testing, you
just tell them you are installing Debian Sarge or Debian 3.1, and set up
/etc/apt/sources.list to refer to sarge in place of
stable/testing/unstable. There is no use telling people they are
running unstable or testing if they don't know what it means in
the first place (like telling people about building a nuclear power
plant instead of a coal power plant - they'll rise up in protest
without realising that coal power plants produce more radiation than a
nuclear power plant).
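
For illustration, a matching sources.list line could look like this (the
mirror below is just a placeholder; use whichever mirror you normally
point at):

  deb http://ftp.debian.org/debian/ sarge main

Tracking the codename like this means the system only moves to a new
release when you deliberately edit that line, not when the
stable/testing symlinks change at release time.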

- Adam



Re: Temporal Release Strategy

2005-04-19 Thread Adrian Bunk
On Fri, Apr 15, 2005 at 04:45:17PM -0400, Patrick Ouellette wrote:
 On Thu, Apr 14, 2005 at 11:59:52PM +0200, Adrian Bunk wrote:
  
  On Wed, Apr 13, 2005 at 10:12:31AM -0400, Patrick A. Ouellette wrote:
  ...
   The progression I see is:
   
   unstable - testing - candidate - stable
   
   The existing rules for promotion from unstable to testing continue to be
   used.
   
   Promotion from testing to candidate requires meeting the same rules as
   promotion from unstable to testing with the following exceptions:
   packages must be in testing for at least 3 months, and have no release
   critical bugs.
  ...
  
  One big problem testing has are transitions. This includes library 
  transitions, but also other transitions like e.g. an ocaml transition 
  affecting several dozen packages currently waiting to enter testing.
  
  Many transitions require a serious amount of manual coordination since 
  all packages have to be ready to go into testing _at the same time_.
  
  Please explain how you think any bigger transition can ever enter your 
  candidate if you add to the testing criteria a 3 months criteria all 
  affected packages have to fulfill at the same time?
  
 
 The system should always be considered a FIFO system.  There are only 2
 places packages can enter the system: unstable, and security-updates.
 The coordination of dependent packages will always require manual
 coordination.  There is no way around it (unless you completely automate
 the build process so it downloads the upstream tar ball and packages it
 for Debian - and never breaks).  The purpose of unstable is to allow
 those problems to be worked out.  Once the group of interdependent
 packages is ready (managed to live in unstable for 10 days without a
 release critical bug) then they will all meet the criteria set to be
 promoted to testing.  The same thing happens again.  Once the entire
 group satisfies the conditions, the entire group migrates to candidate.
 The point of having the promotion conditions is to make sure the system
 is not broken, and can handle library or interdependent package version
 changes.  The rules I referred to are found here:
 http://www.debian.org/devel/testing

The rules and goals of testing are clear.

The more interesting points are the problems of testing that several 
years of using it have shown.

 If package FOO has a RC bug, then everything that depends on FOO will be
 stuck AT WHATEVER POINT IT IS IN THE PROCESS until FOO is fixed.  If
 fixing FOO breaks BAR, then they all wait again until BAR is fixed.  Use
 of experimental to work through some of these issues would help.
 I'm not saying it won't take manual coordination to handle complex
 changes to the system.  I'm not saying it will make anyone's life
 easier.  What my proposal will do is provide the ability to decide when
 package $PACKAGE makes it into stable, we will call that an official
 release and give it a number.  Alternatively, you could declare every
 $INTERVAL Debian releases.  What is in stable should have been well
 tested, and supportable.  Stable no longer is a static concept, but a
 slowly evolving thing.  If you cannot wrap your mind around to accepting
 a stable that evolves, we could snapshot stable at release data and make
 a separate archive (really a Packages.gz and related files as long as
 the version of the package in the release exists in the package pool).

You completely miss my point:

There are several transitions every month, and a big transition can 
involve several hundred packages.

Your proposal requires, that _every single_ package that is part of a 
transition has to be both ready and in testing for over 3 months before 
it can enter your proposed candidate.

If _one_ of the packages that is part of a transition is updated in 
testing during this time, the 3 months start again. For bigger 
transitions, it's therefore practically impossible that they will be 
able to enter your candidate.

Please try to understand the limitations of testing before proposing 
something even stricter.

 Pat

cu
Adrian

-- 

   Is there not promise of rain? Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   Only a promise, Lao Er said.
   Pearl S. Buck - Dragon Seed





Re: Temporal Release Strategy

2005-04-18 Thread Adam M
On 4/16/05, Patrick Ouellette [EMAIL PROTECTED] wrote:
 On Fri, 2005-04-15 at 21:48 -0500, Adam M. wrote:
 
  Unfortunately this totally changes the purpose of stable. Stable is
 
 Yes and no.  It changes the concept of stable in that stable evolves.
 You still have the static release as long as we decide to keep that
 version of all the packages in the package pools.  The implementation of
 package pools created a virtual release environment where the release in
 the archives is only defined by the contents of the Packages file at the
 time of release.

A similar thing is already here in http://snapshot.debian.net/

You cannot do this with the archive. The current archive size is
already too big for most mirrors to handle.

 You can still have this environment.  As long as your system looks at
 the Packages file from the release (and the security updates Packages
 file).

see above link :)

 Testing does not remedy this problem.  If testing was virtually always
 production quality then there would be no need for the release manager
 to go through an elaborate freeze  bug fix cycle to get things in shape
 for a release.

All you are proposing is another testing-like stage. Bugs would
propagate there regardless. Bugs are part of stable as well.

  We should not destroy the notion of stable to get up-to-date packages.
 
 I'm not trying to destroy the notion of stable, I have a different
 definition of stable.  My definition of stable is software that does
 what it is designed to do without bugs, in the manner in which the
 designer and programmer intended.  I'm also trying to show that the

Then your stable never existed. All software has bugs, be it Linux- or
Windows-based. Software of any complexity without any bugs does not
exist. For example, look at the number of bugs in emacs; yet I would
consider that software mature and relatively bug-free.

 traditional concept of a release in Debian is outdated.  I will even go
 so far as to say the reason Debian has had exponentially longer release
 cycles is that the traditional concept of a release is flawed for a
 project the size and scope of Debian.  We need to adjust our thinking
 outside the traditional definitions.

Why? Why is there RHEL 2.0, 3.0...? Why not just RHEL 2005-01-01,
2005-01-02, etc.? The releases are there to provide interface
stability. Everyone does this. What you are proposing is time-based
snapshots, which are already available on
http://snapshot.debian.net/

Now, if you want to support snapshots of Debian with 36 months of
security support, well, be my guest :) In the last 36 months, there were
about 30 uploads of Apache to unstable. Now, if only 15 such versions
propagated to stable snapshots, then you find a remote hole, and
suddenly you have to backport a security fix for 15 versions of
Apache!

Also, try providing efficient security build daemons for all those
stable snapshots! The chroots would have to be rebuilt for each package.

 I think this proposal could actually enhance the stability of Debian
 (where stability is defined as lack of bugs, not software that never
 changes except for security updates), as well as further enhance the
 reputation Debian maintains in the community.

In many ways, current testing is your stable. Extending the testing
period from testing to your proposed candidate and then stable would
do nothing about normal bugs. RC bugs are usually found quite quickly
by people using unstable.

- Adam



Re: Temporal Release Strategy

2005-04-18 Thread Gunnar Wolf
Patrick Ouellette wrote [Sat, Apr 16, 2005 at 01:04:59AM -0400]:
 (...)
 Another difference is that testing will get new versions of packages and
 those versions might (but should not) cause breakage.  Testing has had
 breakage issues in the past.  Ten days is not enough time to catch all
 the possible interactions (or even the majority of them).  I'm also not
 naive enough to think that my proposed candidate step will never cause
 breakage.  The purpose of the additional step is to have a place where
 things change slower than testing to catch more of the obscure bugs that
 only become apparent with more time.  By requiring there be 0 RC bugs to
 progress from testing to candidate and candidate to stable we
 cause stable to change when the software really stabilizes, not at an
 arbitrary time selected by the release team. 

Umh... And... Well, if an RC bug is found in candidate, will it take (at
the very minimum) one month for the fix to get there?

Don't you think that, during the release cycle (and especially during
its first phase after a release), we will always have one RC bug
keeping a second one from getting fixed?

Greetings,

-- 
Gunnar Wolf - [EMAIL PROTECTED] - (+52-55)1451-2244 / 5554-9450
PGP key 1024D/8BB527AF 2001-10-23
Fingerprint: 0C79 D2D1 2C4E 9CE4 5973  F800 D80E F35A 8BB5 27AF





Re: Temporal Release Strategy

2005-04-18 Thread Josh Lauricha
On Mon 04/18/05 16:24, Adam M wrote:
 Also, try providing an efficient stable security build daemons! The
 chroots would have to be rebuilt for each package.

Just a thought, but wouldn't this be done quite efficiently with unionfs?
Just install a minimalist root on one partition (or loopback), then
throw another partition/loopback over it, and delete the second
partition for every new install.

Poof, all changes gone.

-- 

--
| Josh Lauricha| Ford, you're turning|
| [EMAIL PROTECTED] | into a penguin. Stop|
| Bioinformatics, UCR  | it  |
||
| OpenPG:|
|  4E7D 0FC0 DB6C E91D 4D7B C7F3 9BE9 8740 E4DC 6184 |
||
| Geek Code: Version 3.12|
| GAT/CS$/IT$ d+ s-: a- C$ UL$ P++ L|
| $E--- W+ N o? K? w--(---) O? M+(++) V? PS++ PE-(--)|
| Y+ PGP+++ t--- 5+++ X+ R tv DI++ D--- G++  |
| e++ h- r++ z?  |
||





Re: Temporal Release Strategy

2005-04-15 Thread Patrick Ouellette
On Thu, Apr 14, 2005 at 11:59:52PM +0200, Adrian Bunk wrote:
 
 On Wed, Apr 13, 2005 at 10:12:31AM -0400, Patrick A. Ouellette wrote:
 ...
  The progression I see is:
  
  unstable - testing - candidate - stable
  
  The existing rules for promotion from unstable to testing continue to be
  used.
  
  Promotion from testing to candidate requires meeting the same rules as
  promotion from unstable to testing with the following exceptions:
  packages must be in testing for at least 3 months, and have no release
  critical bugs.
 ...
 
 One big problem testing has are transitions. This includes library 
 transitions, but also other transitions like e.g. an ocaml transition 
 affecting several dozen packages currently waiting to enter testing.
 
 Many transitions require a serious amount of manual coordination since 
 all packages have to be ready to go into testing _at the same time_.
 
 Please explain how you think any bigger transition can ever enter your 
 candidate if you add to the testing criteria a 3 months criteria all 
 affected packages have to fulfill at the same time?
 

The system should always be considered a FIFO system.  There are only 2
places packages can enter the system: unstable, and security-updates.
The coordination of dependent packages will always require manual
coordination.  There is no way around it (unless you completely automate
the build process so it downloads the upstream tar ball and packages it
for Debian - and never breaks).  The purpose of unstable is to allow
those problems to be worked out.  Once the group of interdependent
packages is ready (managed to live in unstable for 10 days without a
release critical bug) then they will all meet the criteria set to be
promoted to testing.  The same thing happens again.  Once the entire
group satisfies the conditions, the entire group migrates to candidate.
The point of having the promotion conditions is to make sure the system
is not broken, and can handle library or interdependent package version
changes.  The rules I referred to are found here:
http://www.debian.org/devel/testing

If package FOO has a RC bug, then everything that depends on FOO will be
stuck AT WHATEVER POINT IT IS IN THE PROCESS until FOO is fixed.  If
fixing FOO breaks BAR, then they all wait again until BAR is fixed.  Use
of experimental to work through some of these issues would help.
I'm not saying it won't take manual coordination to handle complex
changes to the system.  I'm not saying it will make anyone's life
easier.  What my proposal will do is provide the ability to decide when
package $PACKAGE makes it into stable, we will call that an official
release and give it a number.  Alternatively, you could declare every
$INTERVAL Debian releases.  What is in stable should have been well
tested, and supportable.  Stable no longer is a static concept, but a
slowly evolving thing.  If you cannot wrap your mind around to accepting
a stable that evolves, we could snapshot stable at release date and make 
a separate archive (really a Packages.gz and related files as long as
the version of the package in the release exists in the package pool).

Pat
-- 

Patrick Ouellette
[EMAIL PROTECTED]
[EMAIL PROTECTED]
Amateur Radio: KB8PYM 





Re: Temporal Release Strategy

2005-04-15 Thread Adam M.
Patrick A. Ouellette wrote:

The progression I see is:

unstable - testing - candidate - stable
  


Unfortunately this totally changes the purpose of stable. Stable is
there not to provide bug-free, up-to-date software releases; stable is
there to provide environmental stability. When someone installs package X from
stable, it is guaranteed that this package will remain at version X
through all security updates, etc. It will remain as is, bugs and all.

This has a few disadvantages and advantages. The main advantages include:

* less time spent on maintaining your production machines - once you set
them up, no need to change the configs.
* ability to maintain 1000s of installations by one person - installing
a new machine can be as simple as `dd` the partition.
* security fixes do not break your system (3rd party applications or
otherwise)

The main disadvantage of this is that stable becomes stale.

The current testing is a remedy for this problem. Up-to-date packages
are provided in testing, where the packages are virtually always of
production quality. The main disadvantage of testing is the lack of the
environmental stability seen in stable.


The only difference between the support of testing vs. stable in Debian
is security support. If we have volunteers for the security team for
testing (for Etch), then I'm certain Debian can have two release modes:

stable - environmental stability implying stale packages
testing - up-to-date packages implying more work by admins

So, if we get testing-security working, then we will essentially have
two releases.

We should not destroy the notion of stable to get up-to-date packages.

- Adam





Re: Temporal Release Strategy

2005-04-15 Thread Patrick Ouellette
On Fri, 2005-04-15 at 21:48 -0500, Adam M. wrote:

 Unfortunately this totally changes the purpose of stable. Stable is

Yes and no.  It changes the concept of stable in that stable evolves.
You still have the static release as long as we decide to keep that
version of all the packages in the package pools.  The implementation of
package pools created a virtual release environment where the release in
the archives is only defined by the contents of the Packages file at the
time of release.

 This has a few disadvantages and advantages. The main advantages include,
 
 * less time spent on maintaining your production machines - once you set
 them up, no need to change the configs.
 * ability to maintain 1000s of installations by one person - installing
 a new machine can be as simple as `dd` the partition.
 * security fixes do not break your system (3rd party applications or
 otherwise)
 

You can still have this environment.  As long as your system looks at
the Packages file from the release (and the security updates Packages
file).

 The main disadvantage of this is that stable becomes stale.
 
 The current testing is a remedies for this problem. Up-to-date packages
 are provided in testing where the packages are virtually always
 production quality. The main disadvantage of testing is lack of
 environmental stability seen in stable.
 

Testing does not remedy this problem.  If testing was virtually always
production quality then there would be no need for the release manager
to go through an elaborate freeze and bug-fix cycle to get things in shape
for a release.

 
 The only difference between the support of testing vs. stable in Debian
 is security support. If we have volunteers for the security team for
 testing (for Etch), then I'm certain Debian can have two release modes,
 

Another difference is that testing will get new versions of packages and
those versions might (but should not) cause breakage.  Testing has had
breakage issues in the past.  Ten days is not enough time to catch all
the possible interactions (or even the majority of them).  I'm also not
naive enough to think that my proposed candidate step will never cause
breakage.  The purpose of the additional step is to have a place where
things change more slowly than testing, to catch more of the obscure bugs that
only become apparent with more time.  By requiring there be 0 RC bugs to
progress from testing to candidate and from candidate to stable, we
cause stable to change when the software really stabilizes, not at an
arbitrary time selected by the release team.

 We should not destroy the notion of stable to get up-to-date packages.

I'm not trying to destroy the notion of stable; I have a different
definition of stable.  My definition of stable is software that does
what it is designed to do without bugs, in the manner in which the
designer and programmer intended.  I'm also trying to show that the
traditional concept of a release in Debian is outdated.  I will even go
so far as to say the reason Debian has had exponentially longer release
cycles is that the traditional concept of a release is flawed for a
project the size and scope of Debian.  We need to adjust our thinking
outside the traditional definitions.

I think this proposal could actually enhance the stability of Debian
(where stability is defined as lack of bugs, not software that never
changes except for security updates), as well as further enhance the
reputation Debian maintains in the community.  

Pat






Re: Temporal Release Strategy

2005-04-14 Thread Otavio Salvador
 wesley == Wesley J Landaker [EMAIL PROTECTED] writes:

wesley On Wednesday 13 April 2005 08:12, Patrick A. Ouellette
wesley wrote:
 PROPOSAL FOR DISCUSSION:
 
 I suggest we can eliminate the traditional concept of a
 release with the addition of another step in the progression
 from unstable to stable.  Additionally, all promotion of
 packages from one step to the next will be automated according
 to strict rules.
 
 The progression I see is:
 
 unstable - testing - candidate - stable

wesley I like the spirit of this idea, although I'm sure the
wesley details need a lot of working over. (This could, but
wesley wouldn't need to *replace* releases--it could simply
wesley augment the release creation process.)

wesley I'm interested to hear other's ideas on why this is/is not
wesley a good idea, and what technical/logistical hurdles would
wesley prevent this from being done.

Maybe a better approach would be more restrictive testing rules, which
would then remove the need for an extra temporary distribution (candidate,
in this case).

I think that if we keep testing closer to a releasable state than it is
now, we can release faster and therefore more frequently -
but I may be wrong.

-- 
O T A V I O    S A L V A D O R
-
 E-mail: [EMAIL PROTECTED]  UIN: 5906116
 GNU/Linux User: 239058 GPG ID: 49A5F855
 Home Page: http://www.freedom.ind.br/otavio
-
Microsoft gives you Windows ... Linux gives
 you the whole house.





Re: Temporal Release Strategy

2005-04-14 Thread Adrian Bunk
On Wed, Apr 13, 2005 at 10:12:31AM -0400, Patrick A. Ouellette wrote:
...
 The progression I see is:
 
 unstable - testing - candidate - stable
 
 The existing rules for promotion from unstable to testing continue to be
 used.
 
 Promotion from testing to candidate requires meeting the same rules as
 promotion from unstable to testing with the following exceptions:
 packages must be in testing for at least 3 months, and have no release
 critical bugs.
...

One big problem testing has is transitions. This includes library 
transitions, but also other transitions, such as an ocaml transition 
affecting several dozen packages currently waiting to enter testing.

Many transitions require a serious amount of manual coordination since 
all packages have to be ready to go into testing _at the same time_.

Please explain how you think any bigger transition can ever enter your 
candidate if you add to the testing criteria a 3-month criterion that all 
affected packages have to fulfill at the same time?
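
To make the constraint concrete, here is a tiny sketch of it (package
names, dates and the 90-day cutoff are invented for illustration; this
is not an existing Debian tool):

  from datetime import date, timedelta

  THREE_MONTHS = timedelta(days=90)

  def transition_ready(entered_testing, today):
      # The whole group has to satisfy the 3-month criterion at once.
      return all(today - entered >= THREE_MONTHS
                 for entered in entered_testing.values())

  group = {"libfoo": date(2005, 1, 1),   # hypothetical transition members
           "bar":    date(2005, 1, 5),
           "baz":    date(2005, 1, 10)}

  print(transition_ready(group, date(2005, 4, 15)))  # True: all waited long enough
  group["bar"] = date(2005, 4, 10)                   # one new upload re-enters testing
  print(transition_ready(group, date(2005, 4, 15)))  # False: the wait starts over

A single fresh upload to any member of the group resets that package's
clock and blocks the whole group again.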

 Pat

cu
Adrian

-- 

   Is there not promise of rain? Ling Tan asked suddenly out
of the darkness. There had been need of rain for many days.
   Only a promise, Lao Er said.
   Pearl S. Buck - Dragon Seed





Temporal Release Strategy

2005-04-13 Thread Patrick A. Ouellette
Since I first became involved with Debian (1997 ish), people have
complained about the slow release cycle.  This has caused me to draw the
conclusion that there will always be someone who complains about the
frequency (either slow or fast) of official releases.

BACKGROUND COMMENTARY:

The advent of testing and package pools held the promise to shorten the
release cycle and improve the stability of the Debian stable release.
The debate can rage on in other circles as to the success or failure of
that statement.

There may be another way.  The institution of package pools has
essentially reduced the concept of a release to a package index file
generated on a particular day.  As long as the packages listed in the
Packages.gz file are available in the package pool, that Packages.gz
file describes a release.  The release may not be stable, and may
contain many RC bugs, but it is a definite, reproducible collection of
packages that can be installed.

The automated progression of packages from unstable to testing has made
testing a viable distribution for many users.  That is not to say
testing is suitable for all users and all tasks, but rather that testing
is frequently stable enough for many uses.  I will venture to say the
promotion process from unstable to testing is an unqualified success.

PROPOSAL FOR DISCUSSION:

I suggest we can eliminate the traditional concept of a release  with
the addition of another step in the progression from unstable to
stable.  Additionally, all promotion of packages from one step to the
next will be automated according to strict rules.

The progression I see is:

unstable - testing - candidate - stable

The existing rules for promotion from unstable to testing continue to be
used.

Promotion from testing to candidate requires meeting the same rules as
promotion from unstable to testing with the following exceptions:
packages must be in testing for at least 3 months, and have no release
critical bugs.

Promotion from candidate to stable would follow a similar pattern, with
a time in candidate requirement of 3 additional months.
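
As a rough sketch of what such an automated promotion check might look
like (the field names, the exact day counts and the example package are
illustrative assumptions, not part of the proposal itself):

  from datetime import date, timedelta

  THREE_MONTHS = timedelta(days=90)  # "at least 3 months", counted in days

  def may_promote(entered_stage_on, rc_bugs, today):
      # The proposed criteria: 3 months in the current stage, no RC bugs.
      return rc_bugs == 0 and today - entered_stage_on >= THREE_MONTHS

  # testing -> candidate: a package that entered testing on 10 Jan, no RC bugs
  print(may_promote(date(2005, 1, 10), rc_bugs=0, today=date(2005, 4, 13)))  # True
  # candidate -> stable would run the same check against the candidate entry date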

Security updates are then provided for packages for 36 months after they
have been replaced with a newer version in stable.

No changes are made to experimental.

CD image generation can be run from any stage.  I would suggest monthly
image creation from candidate, and quarterly generation from stable.

Purists who insist on blessing a collection of packages proven to
work together with a release name and number can be satisfied if we
release driven by content changes (new libc, new desktop, whatever)
instead of by the calendar.



Pat

-- 

Patrick Ouellette
[EMAIL PROTECTED]
[EMAIL PROTECTED]
[EMAIL PROTECTED]
Amateur Radio: KB8PYM 





Re: Temporal Release Strategy

2005-04-13 Thread Gunnar Wolf
Patrick A. Ouellette wrote [Wed, Apr 13, 2005 at 10:12:31AM -0400]:
 (...)
 The progression I see is:
 
 unstable - testing - candidate - stable
 
 The existing rules for promotion from unstable to testing continue to be
 used.
 
 Promotion from testing to candidate requires meeting the same rules as
 promotion from unstable to testing with the following exceptions:
 packages must be in testing for at least 3 months, and have no release
 critical bugs.
 
 Promotion from candidate to stable would follow a similar pattern, with
 a time in candidate requirement of 3 additional months.

Umh... There is a simple problem with your proposal: Most of my
packages are quite stable, yes, but some would never reach candidate
status. Try uploading a package every five days (with priority=low) -
it will never reach testing, as the old version disappears under the
new one.

Yes, this could be sorted out, so that old versions no longer
disappear until ${fateful_event}. This would create more problems: If
a RC bug report is closed, you will have to keep track of which upload
did the trick, not considering any of the ones below it for testing or
candidate. 

Finally, this would make any library migration a real nightmare :-/
You'd have to somehow keep the archive synchronized, doing something
similar to what is currently done re:testing, but on a _much_ broader
scale. Tracking dependencies and FTBFS bugs could become basically
impossible. 

...But if you come up with an implementation, I'll just shut up :)

Greetings,

-- 
Gunnar Wolf - [EMAIL PROTECTED] - (+52-55)1451-2244 / 5554-9450
PGP key 1024D/8BB527AF 2001-10-23
Fingerprint: 0C79 D2D1 2C4E 9CE4 5973  F800 D80E F35A 8BB5 27AF





Re: Temporal Release Strategy

2005-04-13 Thread Patrick A. Ouellette
On Wed, Apr 13, 2005 at 11:11:13AM -0500, Gunnar Wolf wrote:
 Date: Wed, 13 Apr 2005 11:11:13 -0500
 From: Gunnar Wolf [EMAIL PROTECTED]
 Subject: Re: Temporal Release Strategy
 To: Patrick A. Ouellette [EMAIL PROTECTED],
   debian-devel@lists.debian.org
 
 Patrick A. Ouellette dijo [Wed, Apr 13, 2005 at 10:12:31AM -0400]:
  (...)
  The progression I see is:
  
  unstable - testing - candidate - stable
  
  The existing rules for promotion from unstable to testing continue to be
  used.
  
  Promotion from testing to candidate requires meeting the same rules as
  promotion from unstable to testing with the following exceptions:
  packages must be in testing for at least 3 months, and have no release
  critical bugs.
  
  Promotion from candidate to stable would follow a similar pattern, with
  a time in candidate requirement of 3 additional months.
 
 Umh... There is a simple problem with your proposal: Most of my
 packages are quite stable, yes, but some would never reach candidate
 status. Try uploading a package every five days (with priority=low) -
 it will never reach testing, as the old version disappears under the
 new one.

If you upload a package to unstable every 5 days with a low priority it
will not migrate from unstable under the current system without manual
intervention.

 
 Yes, this could be sorted out, so that old versions no longer
 disappear until ${fateful_event}. This would create more problems: If
 a RC bug report is closed, you will have to keep track of which upload
 did the trick, not considering any of the ones below it for testing or
 candidate. 

The only time you need to worry about old versions is in the final
stable tree.  If a package in stable depends on another package with a
strict or upper-bounded version dependency, the promotion of the new package would
break stable.  This means one of two things needs to happen: either the
old package needs to be upgraded or the old package needs to be removed.
Policy would have to be set on what the proper action is for orphaned
packages.  I don't think it too unreasonable to expect an actively
maintained package to be updated within 9 months of the upload of an
updated dependency.  I don't think it too unreasonable to remove an
orphaned package that reaches that state either.  People have complained
about the number of packages - now we have a natural method to remove
packages from the distribution that are no longer used.
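
As an illustration (package names and versions invented), a dependency
like the following is what breaks when the depended-on package moves on
in an evolving stable:

  Package: foo
  Depends: libbar (= 1.2-3)

If stable quietly replaces libbar 1.2-3 with 1.3-1, foo becomes
uninstallable until it is rebuilt or its dependency is relaxed.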

A package must be in unstable for at least 2 days according to the
rules in the FAQ.  The preferred time is 10 days.  Uploading every 5
days with low priority should just replace the package with the newer
version and start the clock again.  If your package is unable to meet
the 3-month time in testing (or the 10-day time in unstable) due to 
frequent uploads, then it really is not a stable package, is it?  
That is the point of having a stable branch - it should change slowly 
and the packages in the stable area should be, well, stable.

If an RC bug report is closed, the new package is uploaded to unstable
and must run through the process.  The idea being that by the time your
package reaches candidate or stable status it really is stable and
contains no known RC bugs.

 
 Finally, this would make any library migration a real nightmare :-/
 You'd have to somehow keep the archive synchronized, doing something
 similar to what is currently done re:testing, but on a _much_ broader
 scale. Tracking dependencies and FTBFS bugs could become basically
 impossible. 

The rules for migration from unstable to testing cover the library
migration issue.  If a package depends on one or more other
packages, all of them must exist and be satisfied for migration to occur.  So we
either already have this problem, or we don't.  If we have the problem,
we need to solve it anyway.

 
 ...But if you come up with an implementation, I'll just shut up :)
 

Discussion first.  Then consensus, then implementation.


Pat

--
Patrick Ouellette
[EMAIL PROTECTED]
[EMAIL PROTECTED]
Amateur Radio: KB8PYM 





Re: Temporal Release Strategy

2005-04-13 Thread Wesley J. Landaker
On Wednesday 13 April 2005 08:12, Patrick A. Ouellette wrote:
 PROPOSAL FOR DISCUSSION:

 I suggest we can eliminate the traditional concept of a release  with
 the addition of another step in the progression from unstable to
 stable.  Additionally, all promotion of packages from one step to the
 next will be automated according to strict rules.

 The progression I see is:

 unstable - testing - candidate - stable

I like the spirit of this idea, although I'm sure the details need a lot of 
working over. (This could, but wouldn't need to *replace* releases--it 
could simply augment the release creation process.)

I'm interested to hear others' ideas on why this is/is not a good idea, and 
what technical/logistical hurdles would prevent this from being done.

-- 
Wesley J. Landaker [EMAIL PROTECTED]
OpenPGP FP: 4135 2A3B 4726 ACC5 9094  0097 F0A9 8A4C 4CD6 E3D2

