Re: Which packages will hold up the release?

2003-10-13 Thread Björn Stenberg
Colin Watson wrote:
 In the first example, the fact that foo depends on bar while
 simultaneously conflicting with the version of bar in the suite you're
 looking at is the thing that's bad, because it means foo can't be
 installed. The second example is OK, even though foo and bar can't be
 installed simultaneously; in an ideal world we might check that one of
 them is Priority: extra, but there's a bit too much other stuff going on
 for this to be feasible right now.

Thank you.

 These examples are a bit contrived, but there are certainly real cases
 in the archive.

My script currently only finds one:

- jpilot-backup depends on jpilot >= 0.99.4-1 but testing has 0.99.2-2
- jpilot-backup conflicts with jpilot << 0.99.4-1 but testing has 0.99.2-2

If you know of other cases, I'd appreciate a note so I can examine why my 
script doesn't find them.
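
For the record, the heart of that check looks roughly like this (a simplified
sketch, not the actual script; the version comparison is a placeholder that
needs replacing with a real policy-compliant one, like the sketch in my
2003-10-08 mail further down):

use strict;
use warnings;

# %testing maps package name to the version currently in testing.
my %testing = ( jpilot => '0.99.2-2' );

# Placeholder comparator; a real script needs policy-compliant comparison.
sub vercmp { $_[0] cmp $_[1] }

sub relation_holds {
    my ($have, $op, $want) = @_;
    my $cmp = vercmp($have, $want);
    return $cmp >= 0 if $op eq '>=';
    return $cmp <= 0 if $op eq '<=';
    return $cmp >  0 if $op eq '>>';
    return $cmp <  0 if $op eq '<<';
    return $cmp == 0 if $op eq '=';
    return 1;    # unversioned relation: presence is enough
}

# jpilot-backup: Depends: jpilot (>= 0.99.4-1), Conflicts: jpilot (<< 0.99.4-1)
my $v = $testing{jpilot};
print "depends unsatisfiable in testing\n"
    if !defined $v or !relation_holds($v, '>=', '0.99.4-1');
print "conflicts with the version in testing\n"
    if defined $v and relation_holds($v, '<<', '0.99.4-1');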

-- 
Björn




Re: Which packages will hold up the release?

2003-10-09 Thread Björn Stenberg
Steve Langasek wrote:
 Ok.  BTW, are you taking into account the possibility of a package being
 uninstallable due to versioned Conflicts, and Conflicts between packages
 which otherwise satisfy a package's dependencies?

No, not yet. I will look into it.

-- 
Björn




Re: Which packages will hold up the release?

2003-10-08 Thread Björn Stenberg
Steve Langasek wrote:
 The term "metapackage" is a gratuitous label here.  There is a real
 binary package (as opposed to a virtual package) in the archive named
 "gcc", which comes from the gcc-defaults source package; and its
 versions are handled just like those of any other package.

Ah, silly me. I was only looking in the Sources files, completely forgetting
the Packages files.

Now there's a first test implementation in place. It reads the Depends and
Build-Depends* fields and reports potential problems with those packages.

I currently don't handle the arch-specific component of dependencies properly
- those are simply stripped. Alternative packages are all checked, but there's
a prefix "alternative x/y:" on each line to indicate this. Also, I only use
the i386 Packages files so far.
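
The parsing itself is roughly this (a simplified sketch, not the actual code,
and the field contents are invented for illustration):

use strict;
use warnings;

my $field = 'libfoo-dev (>= 1.2-3), debhelper [!hurd-i386], bar | baz (>> 2.0)';

for my $clause (split /\s*,\s*/, $field) {
    my @alternatives = split /\s*\|\s*/, $clause;
    for my $alt (@alternatives) {
        $alt =~ s/\s*\[[^]]*\]//;    # arch-specific part: simply stripped
        my ($pkg, $op, $ver) =
            $alt =~ /^(\S+)(?:\s*\(\s*([<>=]+)\s*([^)\s]+)\s*\))?/;
        printf "%s%s%s\n",
            @alternatives > 1 ? 'alternative: ' : '',
            $pkg,
            defined $op ? " ($op $ver)" : '';
    }
}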

Does anyone have a policy-compliant version comparator in Perl that I can
reuse? I'm slightly confused as to the exact meaning of section 5.6.11. This
means some version compares (such as xaw3dg's 1.5+E-1 vs 1.5-25) currently
return the wrong result in my script.
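
In the meantime, here is my own rough attempt, written straight from my
reading of the policy algorithm. Treat it as an untested sketch and please
point out mistakes:

use strict;
use warnings;

# Character ranking for the non-digit parts: end-of-string and digits
# rank lowest, then letters, then everything else.
sub char_order {
    my ($c) = @_;
    return 0 if !defined $c or $c =~ /\d/;
    return ord($c) if $c =~ /[A-Za-z]/;
    return ord($c) + 256;
}

# Compare two fragments (upstream version or Debian revision) by
# alternating runs of non-digits and digits.
sub fragcmp {
    my @a = split //, shift;
    my @b = split //, shift;
    while (@a or @b) {
        # non-digit run: compare character by character
        while ((@a and $a[0] !~ /\d/) or (@b and $b[0] !~ /\d/)) {
            my $cmp = char_order($a[0]) <=> char_order($b[0]);
            return $cmp if $cmp;
            shift @a;
            shift @b;
        }
        # digit run: compare numerically (naive, but fine for sane versions)
        my ($da, $db) = ('', '');
        $da .= shift @a while @a and $a[0] =~ /\d/;
        $db .= shift @b while @b and $b[0] =~ /\d/;
        my $cmp = ($da || 0) <=> ($db || 0);
        return $cmp if $cmp;
    }
    return 0;
}

# Split "epoch:upstream-revision"; epoch defaults to 0, revision to "".
sub split_version {
    my ($v) = @_;
    my $epoch = $v =~ s/^(\d+):// ? $1 : 0;
    my $rev   = $v =~ s/-([^-]*)$// ? $1 : '';
    return ($epoch, $v, $rev);
}

sub dpkg_vercmp {
    my ($ea, $ua, $ra) = split_version(shift);
    my ($eb, $ub, $rb) = split_version(shift);
    return $ea <=> $eb if $ea != $eb;
    return fragcmp($ua, $ub) || fragcmp($ra, $rb);
}

# The case above: xaw3dg's 1.5+E-1 vs 1.5-25
print dpkg_vercmp('1.5+E-1', '1.5-25'), "\n";    # 1, i.e. 1.5+E-1 is newer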

Speaking of xaw3d, I found a discrepancy in versions that I don't
understand. testing/main/source/Sources says xaw3d in testing is version
1.5+E-1, while testing/main/binary-i386/Packages says xaw3d in testing is
version 1.5-25. How come the source and binary packages have different
versions in testing?

I would appreciate if some of you tested this and reported cases where you
know there is a problem but my script doesn't report it. The results are
suspiciously clean, but I haven't yet been able to pinpoint a case which is
clearly wrong. The script will not say anything if it finds no problem, but
you can use the "show all dependencies" link to see the checks done.

The page for openoffice.org shows an example of the output:
  http://bjorn.haxx.se/debian/testing.pl?package=openoffice.org

I intend to inhibit printout if one of alternative dependencies match, but
currently all broken dependencies are displayed.

-- 
Björn




Re: Which packages will hold up the release?

2003-10-07 Thread Björn Stenberg
Steve Langasek wrote:
 Hypothetical example:
 
 29 packages wait on (151 packages are stalled by) libxml2.  This package
 is too young, and should be a valid candidate in 8 days.
 
 Suppose that the libxml2 source package provided not only the
 libxml2-python2.3 binary package, but also a libxml2-python package that
 depended on python (<< 2.3).  If that were the case, then even after
 libxml2 became a valid candidate in its own right, it would still be
 held up by the python2.3 transition.

Thank you. Some followup questions:

1) How are meta packages handled, such as libz-dev, which libxml2 depends on?
There is no package or binary with that name listed in Sources.

2) How is meta package versioning handled? The gcc-defaults package, version
1.9, is the only package providing the "gcc" binary (without a -version suffix),
of which many packages require version >= 2.95.

-- 
Björn




Re: Which packages will hold up the release?

2003-10-03 Thread Björn Stenberg
Steve Langasek wrote:
 Yes, I refer to these lists frequently. :)  Thanks for putting these
 together!

Thanks for using them. ;)

 Yep, and libxml2 is also a dependency of libxslt.  But of course,
 neither of these are packages that need direct attention; the one is
 held up waiting for the other, which is only waiting because it's too
 young.  It's the related packages that need to be examined and put in
 order (by removals or NMUs), and there's no good way to figure out right
 now which packages those are, short of digging through the dependency
 tree (or running simulations).

I don't quite follow you here. What exactly would you like to see? Which
packages are waiting for the libxslt/libxml2 knot to be untied? That's
available here:
  http://bjorn.haxx.se/debian/testing.pl?waiting=libxml2
  http://bjorn.haxx.se/debian/testing.pl?staller=libxml2

 Well, if you want to write a script that can trawl the dependency graphs
 and identify work-needed packages within a cluster... :)

Could you tell me in more detail what you mean? I'm not very experienced with 
the Debian release process, so I am not familiar with the nuances. I already 
trawl the dependency tree, what information would you like to distill from it? 
(I.e., define "work-needed packages" and "cluster".)

A hypothetical example would be good, to get me on the right track.

(I'll be away for the weekend, so I can't respond until sunday.)
-- 
Björn




Re: Which packages will hold up the release?

2003-10-02 Thread Björn Stenberg
Steve Langasek wrote:
 What's hard to see at a glance is how large collections of packages are
 interrelated in their dependencies.  Many packages that you *don't* use
 may be having a direct effect on the packages you *do* use as a result
 of their bugginess.  I'd like to be able to make as much of this
 information as possible available to developers, so they can dig into
 some of the larger package knots according to their interests rather
 than it being exclusively the domain of the RM & assistants.

I'm interested in helping with this. My "why is X not in testing yet" script
attempts to identify some hot spots, in the form of a few crude toplists:

  http://bjorn.haxx.se/debian/toplist.html
  http://bjorn.haxx.se/debian/stalls.html

The first sorts packages by how many other packages directly depend on them.
Top-3: python2.3, kdelibs, qt-x11-free.

The second sorts packages by how many other packages they stall, following
dependencies through more than one level. Top-3: python2.3, libxml2 and libxslt.

I'd appreciate ideas and suggestions on how to improve this and create other
information digests that can help developers find and choose areas to work on.

-- 
Björn




Re: Why doesn't yehia enter testing?

2003-08-04 Thread Björn Stenberg
Andreas Rottmann wrote:
 I wonder why yehia isn't entering testing. According to [0] it makes
 qmailmrtg7 uninstallable, but qmailmrtg7 is totally unrelated to
 yehia, AFAICS.
 
 Regards, Andy
 
 [0] http://bjorn.haxx.se/debian/testing.pl?package=yehia&expand=1

I've been on vacation, during which the update_output.txt file format was 
slightly modified. My script didn't know about the 'endloop:' entry, and thus 
was confused by this passage:

trying: yehia
skipped: yehia (142 - 0)
got: 54+0: a-54
* alpha: libgdbi-dev, libgdbi0, libgql-dev, libgql0, libgql0-driver-mysql
endloop: 546+0: a-48:a-50:h-44:i-74:i-48:m-50:m-44:m-43:p-46:s-50:s-49
now: 557+0: a-49:a-51:h-45:i-75:i-49:m-51:m-45:m-44:p-47:s-51:s-50
* alpha: qmailmrtg7

This bug is fixed now. Thanks for reporting it.
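
For the curious, the parser's line dispatch now looks schematically like this
(not the actual code; the handlers are reduced to prints):

use strict;
use warnings;

open my $fh, '<', 'update_output.txt' or die "open: $!";
while (my $line = <$fh>) {
    if    ($line =~ /^trying: (\S+)/)        { print "attempt: $1\n" }
    elsif ($line =~ /^skipped: (\S+)/)       { print "skipped: $1\n" }
    elsif ($line =~ /^\* (\S+?): (.*)/)      { print "broken on $1: $2\n" }
    elsif ($line =~ /^endloop: /)            { next }  # the entry I didn't know
    elsif ($line =~ /^(?:got|now|accepted)/) { next }  # scores etc.
}
close $fh;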

-- 
Björn




Re: Why doesn't libsidplay enter testing?

2003-07-03 Thread Björn Stenberg
Gerfried Fuchs wrote:
  Yes, I've read the testing page with the FAQ and both the
 testing_excuses and testing_output, but I can't see the reason why
 libsidplay doesn't enter testing.

I've written a little script that tries to answer precisely this type of
question. You can run it here: http://bjorn.haxx.se/debian/

For libsidplay, it says:

- Updating libsidplay makes 3 packages uninstallable on alpha: sidplay-base, 
xmms-sid, xsidplay
  - sidplay-base is waiting for libsidplay
  - Updating sidplay-base makes 1 packages uninstallable on alpha: sidplay-base
  - xmms-sid is waiting for libsidplay
  - Updating xmms-sid makes 1 packages uninstallable on alpha: xmms-sid
  - xsidplay is waiting for libsidplay
  - Updating xsidplay makes 1 packages uninstallable on alpha: xsidplay

The packages are waiting for each other, so none of them can go in. It looks
like a nudge by a maintainer is required.

 From the update_output it seems that alpha seems to have problems with
 the package 

Platforms are tested in alphabetical order, and only the first that breaks is 
displayed. That's why many packages are reported as uninstallable on alpha. 
Actually they are most likely uninstallable on many other platforms too, but 
only the result for alpha is displayed. 

-- 
Björn




Re: Why doesn't libsidplay enter testing?

2003-07-03 Thread Björn Stenberg
Gerfried Fuchs wrote:
  Thanks for the great script.  It shows me that the testing script seems
  to be buggy, because:
 
- Updating sidplay-base makes 1 packages uninstallable on alpha: 
  sidplay-base
 
  Uhm, that is somehow nonsense. How can an update of a package make
 itself uninstallable? What's the reasoning behind it?

Because it breaks testing rule #5: "The operation of installing the package
into testing must not break any packages currently in testing."

Updating sidplay-base alone breaks the current versions of xmms-sid and 
xsidplay. This is not allowed, and thus sidplay-base is uninstallable.

The solution is to update all of the packages at once, which requires manual 
intervention. As Colin Watson said, this has already been mentioned to the 
maintainer so the packages should be going in soon.

-- 
Björn




Re: Answers to Why is package X not in testing yet?

2003-05-15 Thread Björn Stenberg
Joe Buck wrote:
 However, the output is redundant in many cases.

Fixed now.

-- 
Björn




Re: security in testing

2003-05-15 Thread Björn Stenberg
Manoj Srivastava wrote:
  This is, after all, more than just a herd of cats.
 How on earth did you get that quaint idea?

From looking at Debian. It is far more structured, organised and controlled
than the great majority of free software projects out there.

 If you want a universally held firm direction, go read the social
 contract. That is as close as you are going to get.

I disagree. There is obviously a consensus on a number of important issues,
such as that making all ports adhere to the lowest common denominator is more
important than letting a few ports catch up with time.

Direction need not come from on high, it more often evolves from long
discussions in developer mailing lists. That does not mean it does not exist.

-- 
Björn




Re: security in testing

2003-05-15 Thread Björn Stenberg
Keegan Quinn wrote:
 Funny how myself and every admin I know have only very minor issues with
 running unstable.  What, pray tell, makes it such an 'obvious' non-option
 for end users?

How about constantly repeated statements to that effect?

So you did not even look at the release announcement, and yet
you run unstable. You are lucky that all that happened was that you had extra
copies of mail. People had had much worse happen to them running unstable.
  -- Manoj Srivastava, linux.debian.bugs.dist, 1999-07-02

Newbies are constantly told "don't run unstable" by all clued users.  The
ones that persist are either very dumb, and fail. Or very intelligent, and
succeed after mastering the learning curve.
  -- Stephen R. Gore, debian-devel 2000-06-05

Don't run unstable - it's normal that unstable sometimes breaks.
  -- Adrian Bunk, muc.lists.debian.user, 2001-02-16

The real moral: if you don't have a good chance of figuring out what's
wrong on your own, and fixing, backing out of, or jury-rigging around it
without outside help... don't run unstable.
  -- Craig Dickson, muc.lists.debian.user, 2002-11-14

there are risks associated with running unstable, if you're not willing
or not able to deal with those risks then DON'T RUN UNSTABLE.
  -- Craig Sanders, debian-devel 2002-12-13

The list can be made much longer, but I think you get the idea. End users are
discouraged from running unstable, and for good reasons.

 I do like the sound of this, but saying it has a place and actually making
 it happen are very different things.  There seems to be a lot of the former,
 and little of the latter

That tired old argument doesn't work on me, since I have already volunteered to
set up a testing-i386 release. :-)

-- 
Björn




Re: security in testing

2003-05-14 Thread Björn Stenberg
Michael Stone wrote:
 All the complaints we see every couple of weeks about testing would be swept
 away if people followed this advice and simply didn't use testing.

This brings back the question I never got an answer for: Who is Debian for?

If only stable and unstable are supposed to be used, Debian excludes the vast
majority of computer users. Is this intentional? Is it desired?

-- 
Björn




Re: security in testing

2003-05-14 Thread Björn Stenberg
Manoj Srivastava wrote:
 On Wed, 14 May 2003 15:16:38 +0200, Björn Stenberg [EMAIL PROTECTED] said: 
  This brings back the question I never got an answer for: Who is
  Debian for?
 
   Isn't it obvious? Me, of course. Y'all are working to provide
  a stable environment for my machines at home. Least ways, that's what
  I am working for. ;-)

Jokes are fine, but a serious answer would be good too. The question is
serious, important and unanswered. It affects who wants to use Debian, and who
wants to contribute to it.

-- 
Björn




Re: security in testing

2003-05-14 Thread Björn Stenberg
Matt Zimmerman wrote:
 There is no shortage of opinions about what we should do, but there is
 unlikely to be any action until an "I" arises who actually does the work.

Of course, but it's still important to discuss what should be done. "Show me
the code" is only a useful response if anyone in the audience actually knows
what said code is supposed to achieve.

-- 
Björn




Re: security in testing

2003-05-14 Thread Björn Stenberg
Don Armstrong wrote:
 Debian will always be for whoever the people contributing to Debian
 are willing/want it to be for. No more, no less.

Naturally, since it's free software. But saying "Debian is what we make it"
doesn't answer the question of what you _want_ it to be, only what has been done.

Surely between the appointed leader, technical committee, policy committee,
quality assurance team and the release management, there is some sort of
shared idea of where Debian is heading? This is, after all, more than just a
herd of cats.

-- 
Björn




Re: A strawman proposal: testing-x86 (Was: security in testing)

2003-05-14 Thread Björn Stenberg
Theodore Ts'o wrote:
 So let me make the following modest strawman proposal.  Let us posit
 the existence of a new distribution, which for now I'll name
 testing-x86.

I suggested the same thing a few weeks ago, with little reaction. Nice to see
someone else got the same idea.

I'd volunteer to set it up, but I need to become a DD to access the relevant
data. Does anyone want to advocate/sponsor me?

(And, to avoid the whole "testing is a release tool" debate, we could call it
something else. 'trying', perhaps. ;-) )

-- 
Björn




Re: security in testing

2003-05-14 Thread Björn Stenberg
Manoj Srivastava wrote:
   Testing is a release tool. Not a distribution for random end
  users to run.

That is rather different from what is written on the web site:

For basic, user-oriented information about the testing distribution, please
see the Debian FAQ. (/devel/testing)

testing
The main advantage of using this distribution is that it has more recent
versions of software, and the main disadvantage is that it's not completely
tested and has no official support from Debian security team. (/releases/)

Nowhere does it say it's just a release tool, unsuitable for public
consumption.

What's worse, saying "testing is not for public use" means there is _no_ place
to get updates, since unstable is obviously not an option for end users. This
makes Debian the only linux distribution I know of that completely eschews
software updates between frozen releases (except for security fixes).

The amount of backporting and apt-pinning going on suggests not all Debian end
users are content with yearly updates. A testing-like middle ground release
for end users definitely has a place in the Debian universe.

-- 
Björn




Re: libstdc++... Help please

2003-04-30 Thread Björn Stenberg
Matthias Urlichs wrote:
 If I understood correctly, part of the problem is/was that some of the
 rebuilds simply didn't work because of problems with the new compilers.
 
 The current tools don't allow programs of arch X into testing if they fail to
 build on arch Y. I think that in general this is a good idea.

I disagree. The net effect is that the program gets less testing, resulting in 
fewer bugs found and fixed.

 I don't know if this is comparable, but my experience with non-ix86 Linux
 kernels is that if the people responsible for them allow their version to
 fall behind on the bleeding-edge kernel, the resulting split might be
 permanent. :-/

The big difference is that one kernel version is not allowed to stop the 
progress in all others. If a non-ix86 kernel falls behind, only that kernel 
suffers. In Debian, all ports suffer if one port breaks.

-- 
Björn




Re: pilot-link in Sid and Sarge: Much bigger question

2003-04-29 Thread Björn Stenberg
Stephen Ryan wrote:
 there is no substitute for testing in the *exact* environment that you
 plan to release in.  Period.

My point exactly. We should test packages in the environment we plan to
release: sarge. We should not let new uploads hold other packages
hostage, because then we are only testing in sid, not in sarge.

-- 
Björn




Re: pilot-link in Sid and Sarge: Much bigger question

2003-04-28 Thread Björn Stenberg
Roland Mas wrote:
 To me, you seem to express the view that improving Debian means
 throwing away our release process, including the way testing works.

Then I have expressed myself unclearly. My apologies. I think testing is a
great idea and a most necessary institution. In fact, I wish we had more than
one level of testing.

My view is simply that the current system has weaknesses that merit discussion
in order to hopefully find improvements. I have deliberately not gone into
possible solutions yet, simply because nobody has yet agreed there is a
problem that needs solving in the first place! (Note: The lack of a solution
does not equal the lack of a problem.)

However, since you could almost stretch yourself to hypothetically
acknowledging that we're not quite in heaven yet, I'll say thanks and fire off
some thoughts for discussion:


One problem I see is the enforced binary compatibility. As long as no program
in testing can be upgraded without also upgrading to the latest shlib'ed
versions of all the libraries it uses (which tend to be notoriously stuck in
unstable), bug fixes for individual programs don't reach testing.

After all, it's sarge that's the release candidate, right? Not sid. So why is
sid allowed to dictate dependencies that sarge must conform to?

This is one reason why testing is hopelessly behind on small fixes such as
security patches. A security patch can be changing strcpy to strncpy in two
lines of code. Yet that simple fix will get stuck in unstable if any of the
libraries the fixed program uses has updated their shlib dependency to an
unstable version.

Kill holy cow #1: Binary compatibility. Testing is a separate release, treat
it as such. Branch it off and set up its own buildd server. Build packages in
testing with tools and libraries from testing. Don't use binary packages from
unstable, recompile them. Make sarge, not sid, the reference environment for
sarge.


A second problem is enforced platform compatibility. It creates a
lowest-common-denominator problem of the kind so often frowned upon in other
situations. Any one platform can keep all the others from progressing. Let's
take arm as an example. How many people are running the arm port of debian,
compared to i386? Is it really in the best interest of the project to keep
lots of software from being tested due to build errors on a single port?

Yes, I too want each stable release to work on all official ports. But what is
the most efficient way to get there? Surely testing software on the ports it
works on is better than not testing it at all?

Kill holy cow #2: All-port concurrency in testing. Make a testing-i386
release, where admittance rule #2 is replaced with "It must be compiled and up
to date on i386". Statistics posted to this list earlier show that i386 has
the lowest number of build problems of all ports. And I think it's safe to say
that it has the highest number of users. Combine the two, and you get stuff
tested. A lot.


Both of these options naturally come with several drawbacks and
complications. All models have them, including the current one. But somewhere
there's a breaking point where the advantages outweigh the drawbacks and you
get a better system, producing better software.

 people prefer bitching and complaining about testing being late and stable
 being old, rather than helping fixing bugs.

Perhaps one reason is that fixing enough bugs to get stuff into testing is
currently a whack-a-mole job? With so many dependencies changing all the time,
there is no solid ground. Once you've fixed the showstoppers in package A,
package B uploads something which breaks, you fix that, then C uploads, you
fix that, then a new version of A pops up again...

We're trying to stuff everything in at once. It can be done, but it's very
difficult and requires a good deal of luck or a freeze.

 The problem is not that this process requires software to be tested and
 declared relatively bug-free before they are admitted into testing.  The
 problem is that the software is not even remotely bug-free.

I have addressed this once already. This is only half the truth. Packages are
also stuck in unstable not because they are buggy, but simply because one of
their dependencies evolved so much that its interface changed, breaking
binary compatibility.

 And it is so at least partly because people try to put new versions of
 software into Debian, which means the system integration and the
 synchronisation of programs and libraries are an uphill battle.  And it is
 so at least partly because people complain as soon as there's a new upstream
 release, thus delaying the testing of the whole system.

Frequent uploads would not be as much of a problem if each new upload were not
immediately used for testing other packages. See whack-a-mole above.

Distributed development is fast. That's one of the many benefits of Free
Software. Any system designed to harness this flow of creativity must take the
volatility into account and take 

Re: pilot-link in Sid and Sarge: Much bigger question

2003-04-28 Thread Björn Stenberg
Matthew Palmer wrote:
  Perhaps one reason is that fixing enough bugs to get stuff into testing is
  currently a whack-a-mole job?

 I don't think your proposals will really fix that, since in my experience
 that new version of A probably requires all sorts of new crap from B
 anyway...

Does it, really? Or does it simply have binary dependencies on an unstable
version of B, imposed by B? If one version each of A and B has been accepted into
testing, chances are pretty good that the next version of A will compile and
work fine against that old version of B. Most new versions are fixes, either
of bugs or features. Changes that break source level compatibility are rather
rare.

-- 
Björn




Re: pilot-link in Sid and Sarge: Much bigger question

2003-04-27 Thread Björn Stenberg
Matthew Palmer wrote:
  it's labour-intensive, it's pretty damn effective.
   ^
 
 And there's why it doesn't work for Debian.  We don't have money to throw at
 our developers.

I never claimed we should. I merely explained one of the many reasons Debian
is fundamentally slower than other distributions. I also, as you prominently
underlined, explained why this solution is not an option for Debian.

 Face the facts: Debian Is Different.  No amount of complaining about it will
 change that.

So Debian is perfect? No need to discuss problems and possible solutions?
Maybe we should close the bug tracker too. It's telling that pointing out a
problem is automatically defined as "complaining". What is this, a sect? I
thought this was the developer list.

 You're targetting a different audience than Debian is, so that's probably a
 good way to scratch your itch.

Right, then please define the Debian target audience for me. To end up in the
current state, the list would have to look something like:

1) Developers (who can run unstable)
2) Server admins (who don't need new software)

Is this really what you want? Because then we (or, in that case, you) should
post it in big not-so-friendly letters on the front page so the rest of the
world, including developers, can make an informed choice.

 And? Debian appears to have chosen High Quality over Bleeding Edge.

Actually, Debian has chosen Portability over Quality. Quality means a lot more
than just fixing bugs, you know. A program that does not work with current
data or devices has low quality even if it doesn't crash. The mere age of most
packages in stable is a very serious quality flaw.

And no, before you build that straw man, I'm not saying Debian should contain
more bugs. I'm saying we need to rethink the system, because today we don't
even let the bug fixes in!

 If you don't like that choice, you can choose to use something different, or
 roll your own.

How about improving Debian? Or is that heresy?

 I'd install Debian across the board if a Linux solution was called for,
 because it's a stable, reliable, functional platform, which is *exactly*
 what what companies need.

That is correct, but not complete. Companies also need software to do their
day-to-day work. Software such as cross-platform word processors, spreadsheets
and browsers that can actually show their client's home pages. Debian does not
offer this. Yet not one of you will admit that this is a problem.

 Eye candy comes a distinct 20th (or lower).  I've worked in places which
 are still using greenscreens, because stability, reliability, a known UI,
 and a lack of distraction, is far more important than the latest whizz-bang
 shiny! hey look my monitor's melting GUI.

It is more than a little arrogant to claim that all the changes since woody are
just eye candy...

I might as well say this too, to head off your assumptions: I am a
developer. I run unstable. I don't want to run testing. This is not a personal
"gimme" issue, I already have everything I want. But this issue, this system
flaw, lowers the overall appeal of Debian.

I like Debian. I like the voluntary nature and the Free Software ideals behind
it. I like apt. I've been planning to contribute in various ways. But if the
general consensus among debian developers is that 90% of the computer-using
public is too stupid to care about, I don't think I'll bother. Heck, I've got
what I want, why should I care about the rest of the world? Right?

The social contract says "We will be guided by the needs of our users." I
guess it all comes down to the definition of "our users".

-- 
Björn




Re: pilot-link in Sid and Sarge: Much bigger question

2003-04-26 Thread Björn Stenberg
David Nusinow wrote:
 You say you can't deal with unstable because the software is broken.
 Well, that's because the software you want isn't ready to be released.

That's not the whole truth. A _lot_ of software is ready and working, but is
held back from entering sarge due to dependency problems that other distros
simply don't have. This is an issue I have been probing for the last few weeks.

An example: Before gcc-3.3 and gcc-3.2 went in the other day, no less than 607
packages were stuck in unstable waiting for them. How many of those packages
actually required gcc 3 to compile and run? I'd guess not many.

While Debian's binary compatibility demands can be argued to be more elegant
or right than plain source-level compatibility, they are definitely more
complex and lead to dependency problems that hold back working software.

 Sarge simply isn't ready to release.

Damn right it's not, sarge is way too _old_ for release. Mozilla 1.0.0? No
openoffice.org? Gnome 1? KDE 2.2?

One difference, good or bad, between Debian and commercial distributions is
the lack of branches above stable. When commercial distro X makes a release,
they pick the last-known-good versions of all the packages they want, compile
it all, change a few versions, compile again and ship the product. It's crude,
it's labour-intensive, it's pretty damn effective.

In Debian, it's not uncommon for a great number of packages to need to have
their latest versions bug free at the same time to reach testing and thus be a
release candidate. If any one of the required packages introduces a bug, the
whole house of cards falls down. For packages with a lot of external
references, a testing release is almost impossible to achieve without a
coordinated freeze. Look at Mozilla, it's had 16 uploads since the last one
that was admitted into testing. Why was none of those 16 accepted into
testing? It's not because all of those releases were more buggy than the
testing version is (lord knows 1.0.0 is barely usable.)  It's due to
ever-breaking, and ever-increasing, dependencies. Mozilla now even requires
gtk+ 2.0. I bet that's news to the people running it on KDE, or gnome1, or
win32.

Don't get me wrong. The current system works great. It produces some really
high-quality releases, for a truckload of platforms. The only catch is the
software in those releases is two years old.

Before brushing this aside as an uninformed rant, stop for a moment and
consider which release you'd recommend your computer-savvy-but-no-programmer
friends to use when they want to run linux at work. Then tell me there is no
problem.

-- 
Björn




Re: Some questions about dependencies

2003-04-25 Thread Björn Stenberg
Simon Huggins wrote:
 I have a feeling you want to discover update_output.txt

Thanks, I have looked at it before but apparently not enough. Some followup
questions:

update_output.txt says:

trying: cfitsio
skipped: cfitsio (1144+9)
got: 13+0: a-4:a-9
* arm: fv, libcfitsio-dev, libcfitsio2

If I'm reading /devel/testing right, this means libcfitsio2 becomes
uninstallable on arm if cfitsio goes into testing. But libcfitsio2 is part of
cfitsio. What does this mean?

Later it says:

trying: cfitsio
accepted: cfitsio
   ori: 55+0: a-4:a-6:h-6:i-7:i-4:m-4:m-6:m-6:p-3:s-4:s-5
   pre: 54+0: a-4:a-5:h-6:i-7:i-4:m-4:m-6:m-6:p-3:s-4:s-5
   now: 54+0: a-4:a-5:h-6:i-7:i-4:m-4:m-6:m-6:p-3:s-4:s-5
  most: (110) .. xmms-msa xplanet xprint-xprintorg xwelltris/arm xwelltris/mips 
zope zsh-beta ztutils aolserver-nscache/arm aolserver-nsencrypt/arm apmd 
auto-apt bbappconf bbpager bincimap blender blt boost brltty cfitsio

What does this mean? Accepted, yet still breaks a bunch of packages? Is this 
the attempt to add it, showing what breaks? Then why is cfitsio on the list?

Other confusing entries:

trying: tcpdump
skipped: tcpdump (1074+1)
got: 6+0: a-6
* alpha: dhcpdump, tcpdump

trying: libpcap
skipped: libpcap (792+231)
got: 75+0: a-75
* alpha: [...] tcpdump [...]

Adding the new libpcap apparently breaks tcpdump, yet 
http://packages.qa.debian.org/t/tcpdump.html says tcpdump is waiting for the
new version of libpcap?

I'd like to read the source so I can find out some of the details from that. 
Where should I start? I've looked at http://cvs.debian.org/testing/?cvsroot=dak 
but which files are run where, and when? A peek at a relevant crontab or such 
would be helpful.

And, yes, I intend to write some sort of friendly help page once I manage to 
wrap my head around this.

Thanks.

-- 
Björn




Re: Some questions about dependencies

2003-04-25 Thread Björn Stenberg
Rene Engelhard wrote:
  4) What is the best way to find out why cfitsio depends on gcc-3.3?
 looking in the Packages files?
 (do it on auric directly or d/l them and do it at home)

Where can I download these files? ("Packages" is a tricky word to search for...)

Thanks.

-- 
Björn




Some questions about dependencies

2003-04-24 Thread Björn Stenberg
Hi.

I'm trying to understand the dependency system so I can find the complete and
correct answer to questions of the form "why is the latest version of package
X not in testing yet". I've been annoying some of you with wrong assumptions
previously, sorry about that. I'll avoid that this time and just ask a few
questions.

I know about the five rules, valid candidates, the excuses page(s) and
packages.qa.debian.org. I'm mostly looking at valid candidates and there are
a few things that I don't understand:

1) Debcheck often reports "Package declares a build time dependency on [pkg]
which cannot be satisfied on [port]". Must all such build time dependencies
be resolved before a package is accepted into testing?

2) If (1) is true, how come some packages have unresolved builddeps for
testing and stable? New ports?

3) If (1) is false, where do I look to see why libpcap is not in testing yet?
It is a 260 days old valid candidate with no unresolved dependencies.

4) What is the best way to find out why cfitsio depends on gcc-3.3? I can find
it by searching through all buildd logs and knowing that libgcc1 is part of
gcc-3.3. Is there an easier/quicker way?

5) Both gcc-3.2 and gcc-3.3 appear to contain libgcc1. Why does cfitsio depend
on gcc-3.3 and not gcc-3.2? Because gcc-3.2 depends on gcc-3.3?

(As it currently looks, both gcc-3.2 and gcc-3.3 are actually valid candidates
so maybe the example packages above will finally go into testing soon.
If not, why?)

Thanks.

-- 
Björn




Re: curl, testing and gcc-3.2 (?) (was Re: Debian curl package depends on gcc-3.2?)

2003-04-22 Thread Björn Stenberg
Colin Watson wrote:
 The reason why a library's shlibs get changed
 is that binaries built against one version of the library can't be
 guaranteed to run correctly against older versions.

Because the interface changed or because the previous version was buggy?

I have always assumed the first reason, but it seems many maintainers are
using the second.

While moderately helpful to users of unstable, using shlibs to push bug
fixes can be very destructive to the testing release. It stops other
packages from getting in, while not always fixing the bug anyway (if the
fixed version gets stuck in unstable, which is not uncommon).

I found little in the Debian developer manuals detailing how version
dependencies should, and should not, be used. Did I miss a section about
this, or is there a general consensus about the issue?

-- 
Björn




Re: curl, testing and gcc-3.2 (?) (was Re: Debian curl package depends on gcc-3.2?)

2003-04-16 Thread Björn Stenberg
Colin Watson wrote:
  forcing other packages to always depend on the latest version of them
 No, that's not what shlibdeps do.

Right, it does not force the latest, only the version that is installed on
the machine it runs on. But isn't the effect essentially the same, since most
people/robots that build for unstable will likely have fairly recent library
versions?

If whatever library versions are present at the time of building are defined
as the minimum requirements, doesn't that mean that packages which are in
stable today would not even be accepted into testing if they were rebuilt?
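
If I have understood the mechanics right (corrections welcome), each library
package ships a shlibs file mapping its soname to a dependency, and
dpkg-shlibdeps copies whatever entry happens to be installed on the build
machine. An invented example:

# /var/lib/dpkg/info/libgcc1.shlibs on the build machine:
libgcc_s 1 libgcc1 (>= 1:3.2.3-0pre6)

# a package built there, linking against libgcc_s.so.1, then gets:
Depends: ..., libgcc1 (>= 1:3.2.3-0pre6), ...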

-- 
Björn




Re: curl, testing and gcc-3.2 (?) (was Re: Debian curl package depends on gcc-3.2?)

2003-04-16 Thread Björn Stenberg
Colin Watson wrote:
 No, that's not what shlibdeps do either. See:
 
   
 http://www.debian.org/doc/debian-policy/ch-sharedlibs.html#s-sharedlibs-shlibdeps

Lovely, so it's simply the other way around (as Adam Conrad said already).
Thanks.

However, it still means packages get bogus dependencies that keep them out
of testing. Even if the package in question was already accepted in stable.

Let me be blunt and ask: Is this a "we don't care, go away" issue, or why is
this so difficult to discuss? If it was a frequently asked (and answered)
question, I would expect a ton of list archive links to have been dumped on
me by now. I have no qualms about squeezing blood from stone, but it doesn't
exactly speed the discussion nor minimize my annoying questions.

-- 
Björn




Re: curl, testing and gcc-3.2 (?) (was Re: Debian curl package depends on gcc-3.2?)

2003-04-15 Thread Björn Stenberg
Matthias Urlichs wrote:

Depends: libc6 (>= 2.3.1-1), libgcc1 (>= 1:3.2.3-0pre6), ...
 Note that the version shown is simply the current libgcc.so version.

Current as of when? When the upload was done?

 dpkg-shlibdeps has no idea whether an older version would be sufficient,
 so it plays safe.

Isn't this a problem? Especially for packages depending on libraries with
long release cycles, such as libgcc1 and libc6.

Note: I don't have a suggestion for a better method right now, I'm just
trying to understand the implications of the current system.

-- 
Björn




Re: curl, testing and gcc-3.2 (?) (was Re: Debian curl package depends on gcc-3.2?)

2003-04-15 Thread Björn Stenberg
Jim Penny wrote:
 Björn Stenberg wrote:
  Isn't this a problem? Especially for packages depending on libraries with
  long release cycles, such as libgcc1 and libc6.
 
 Not often.  Most slow release libraries are strongly backwards
 compatible.

That was my point. Since these libs are strongly backwards compatible,
forcing other packages to always depend on the latest version of them
means packages are held back for invalid reasons.

-- 
Björn




Re: Why is only the latest unstable package considered for testing?

2003-04-14 Thread Björn Stenberg
Bob Proulx wrote:
 Just minor searching through the archive turned these up with relevant
 discussion. 

These posts, like your reply in debian-testing, concern packages that are not
Valid Candidates.

My question concerns perfectly working packages that are suitable for testing,
yet are never considered.

I did search the archives, but failed to find a post addressing this issue.

-- 
Björn




Re: curl, testing and gcc-3.2 (?) (was Re: Debian curl package depends on gcc-3.2?)

2003-04-14 Thread Björn Stenberg
Anthony Towns wrote:
 You'll find it does on arm and probably one or two other architectures,
 but in particular:
 
 Package: libcurl2
 Architecture: arm
 Version: 7.10.4-1
 Depends: libc6 (>= 2.3.1-1), libgcc1 (>= 1:3.2.3-0pre6), ...

Sorry for being a pain, but how are these dependencies assigned?

libgcc1 is already in both testing and stable. How is it decided that
libcurl2 requires this specific version? It appears neither the package
maintainer nor the upstream author made this decision (or even knew about it).

Also, the build log for arm contains the following line:

checking whether to use libgcc... no

-- 
Björn




Why is only the latest unstable package considered for testing?

2003-04-13 Thread Björn Stenberg
Hi.

(I first submitted this question to debian-testing, but was referred here for 
discussion.)

I have been looking at the excuses page[0] and was struck by how very old 
some packages are in testing, yet only the very latest bleeding edge version 
from unstable appears to be considered for inclusion.

Am I misunderstanding something, or does this approach punish projects that 
adhere to the Open Source motto of releasing often?

Hypothetical example:

Project X makes an effort to prepare a solid release, squashing all RC bugs and 
making sure each target builds flawlessly. They pack it up, label it 3.0 or 
whatever and release it. The package goes into unstable and, being a 
non-critical update, needs 10 days to become a Valid Candidate[1] for testing.

For a while, people have been working on a big patch to move project X from 
gnome to gnome2. This was submitted to the project but was delayed until after 
3.0. Now that 3.0 is out the door and the users have a stable version to work 
with, the gnome2 patch goes in and a new version, 3.1, is released only a few 
days later. This version is not a valid testing candidate, since gnome2 is not 
yet included in testing, but it's a welcome update for those running 
gnome2/unstable.

Now the catch: Since the testing scripts only consider the latest unstable 
version, the testing-ready 3.0 version is never considered. Instead, the 3.1 
version is rejected (due to depending on gnome2) and the old 2.0 version is 
kept.

Is there a good reason for this? Would it not be better to track all versions 
of a package and include the latest (if any) that fulfills all requirements? It 
seems to me that the current system leaves testing with older versions than 
necessary.

Note that I'm not advocating relaxing the inclusion criteria for testing. I am 
just asking why they are not applied equally to all versions.

[0] http://ftp-master.debian.org/testing/update_excuses.html
[1] http://www.debian.org/devel/testing

-- 
Björn