Re: Potato now stable
Hi, first of all I'm just a debian-user; if you guys don't mind my 2 cents, here they are: I think task packages are a bad approach to the too-many-packages problem. The organisation of the packages shouldn't be part of the dependency system, IMHO. This organisation is intended to help clueless debian-users (like me) navigate the overwhelming Debian archive and select the desired packages without much effort. It should be implemented orthogonally to the dependency system, such as:

                          +--------------------+
    task view ----------- | The DAG of Debian  |
    project view -------- | packages with      |
    functionality         | dependencies as    |
    view ---------------- | vertices           |
                          +--------------------+

A view is an almost-tree structure, such as:

    functionality---tool---ui---x11-windows-manager---sawfish
                 |       |                         |--wmaker
                 |       |--x11-screen-saver---xlockmore-gl
                 |       |--gtk-engines---gtk-engines-thinice
                 |       |--www---browser---communicator
                 |              |--server---apache
                 |              |--command-line-client---wget
                 |--apps---office---financial---emma
                                             |--gnucash
                                             |--gnumeric

    task---all-python---python-core---python-base
                     |              |--python-doc
                     |--python-dev---python-zlib
                                  |--python-gdbm
    ...

    project---python---core---python-base
           |        |--doc---...
           |        |--libs---...
           |        |--ui---...
           |        |--...
           |--gnome---core
                   |--ui-themes
                   |--libs---...
                   |--apps---...

Selecting a group in a view will select all sub-nodes. Unselecting a node won't affect the selection of its parents. That is, the arrows in the views are group relationships and not dependencies. The sawfish package would have a new control line such as:

    Package: sawfish
    Groups: project.gnome.ui.windows-manager, task.gnome-all.windows-manager,
     functionality.tool.ui.x11-windows-manager

    Package: libncurses5
    Groups: project.ncurses.ncurses5.lib, task.ncurses.runtime.ncurses5,
     functionality.runtime-lib.ui.console.ncurses

Just my confused and naive 2 cents. L.D.
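The select-a-group-selects-its-subtree semantics proposed above can be modelled in a few lines (a minimal sketch; the Group class and its methods are illustrative assumptions, not anything in Debian's tools):

```python
# Sketch of the proposed view semantics: selecting a node selects its
# whole subtree; deselecting a node does not propagate up to parents.
# The arrows between nodes are group relationships, NOT dependencies.

class Group:
    def __init__(self, name):
        self.name = name
        self.children = []
        self.selected = False

    def add(self, child):
        self.children.append(child)
        return child

    def select(self):
        # Selecting a group selects every sub-node beneath it.
        self.selected = True
        for child in self.children:
            child.select()

    def deselect(self):
        # Deselecting affects only this node and its subtree; the
        # parents' selection state is left alone.
        self.selected = False
        for child in self.children:
            child.deselect()

ui = Group("functionality.tool.ui")
wm = ui.add(Group("x11-windows-manager"))
sawfish = wm.add(Group("sawfish"))
wmaker = wm.add(Group("wmaker"))

ui.select()        # pulls in the whole subtree
wmaker.deselect()  # drops wmaker; ui and sawfish stay selected
```

New packages added to a selected group later would simply be selected on arrival, which is the behaviour the posting asks the UI to provide.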
Re: Bug tracking system and testing distribution Re: Potato now stable
Joey Hess [EMAIL PROTECTED] writes: Christoph Martin wrote: So, what is the policy to do with a package for the testing distribution, if there is an important bug? Do you remove the package unconditionally or do you try to investigate (like in the rc buglist) if the bug really applies? Well if I were AJ I would just mechanically assume critical bugs are really critical, placing the onus on the package maintainer or any other interested parties to correct the status if it happens to be wrong. But we can do some things on improving the bug tracking system for some more automation. If the bug tool would also report the binary architecture, that would be at least a hint for the maintainer. The maintainer should have a possibility to set a binary-port attribute for a bug report to all or a list of ports, so that the automatic scripts can find out if a bug applies to the testing distribution. The default however should be set to all, because if you find a bug you at first don't know if the bug is also in other environments. A similar scheme should be there for the version number. The bug tracking system should have a changeable field for the version. Perhaps it should record separately the reported version. The maintainer should be able to change the version field, to show if a bug applies also to older, newer, or specific versions. If we have no such information here we must suppose that the bug applies to all architectures and all versions. Christoph
--
Christoph Martin, Uni-Mainz, Germany  Internet-Mail: [EMAIL PROTECTED]
--export-a-crypto-system-sig -RSA-3-lines-PERL--
#!/usr/bin/perl -sp0777iX+d*lMLa^*lN%0]dsXx++lMlN/dsM0j]dsj
$/=unpack('H*',$_);$_=`echo 16dio\U$kSK$/SM$n\EsN0p[lN*1
lK[d2%Sa2/d0$^Ixp|dc`;s/\W//g;$_=pack('H*',/((..)*)$/)
#what's this? see http://www.dcs.ex.ac.uk/~aba/rsa/
Re: Implementing testing (was: Re: Potato now stable)
On Thu, Aug 17, 2000 at 10:17:30PM +1000, Anthony Towns wrote: Hello world, So, on -devel-announce, I mentioned: * New testing distribution [...] So some more details. The way testing is supposed to work is to have three distributions at any one time: a stable tree, a testing tree, and an unstable tree. As we make releases and such, this'll look roughly like:

    stable    testing   unstable
    ~~~~~~    ~~~~~~~   ~~~~~~~~
    potato    woody     sid      (when testing's rolled out)
    woody     [foo]     sid      (when woody's released)
    [foo]     [bar]     sid
    [bar]     [baz]     sid

So basically we're splitting the development tree and the release candidate a little. Not really very *much*; the release candidate's still heavily based on the development tree, so they're by no means independent, but hopefully the separation will be useful. This probably changes the way we deal with unreleased architectures a bit too. Architectures like sparc64, mips, mipsel, hurd-i386 and the forthcoming superh are all in development, but aren't release candidates yet. As such, it will presumably be appropriate to leave them in unstable, without linking to them from testing, at least until they're ready for release. This is fairly similar to the current motivation behind sid; hence the reuse of that existing distribution rather than creating a completely different one. As far as maintainers go, they more or less just need to keep uploading to unstable. They still need to be careful to only upload things that are more or less ready for release; it's not really reasonable to have two different forks of a package in the different distributions (as it is for stable/unstable or even for frozen/unstable). Basically, the version in testing simply won't get updated until the problems with the version in unstable are worked out. If a maintainer wants to be a bit more careful about getting their software ready for release, they can look at reports like the one at http://auric.debian.org/~ajt/update_excuses.html to see if testing's noticed any problems. 
(At the moment, these aren't mailed to maintainers or anything? Should they be? They're all (supposedly) worthy of an RC bug, so in many cases the maintainer will already have been notified because a bug will have been filed.) If a maintainer specifically *doesn't* think a package should be considered a release candidate just yet, then all s/he has to do is file an important bug against the package, and it'll be held in unstable while that bug's open. Cheers, aj -- Anthony Towns [EMAIL PROTECTED] http://azure.humbug.org.au/~aj/ I don't speak for anyone save myself. GPG signed mail preferred. ``We reject: kings, presidents, and voting. We believe in: rough consensus and working code.'' -- Dave Clark pgp6oGKzOhEV6.pgp Description: PGP signature
Re: Potato now stable
On Fri, 18 Aug 2000, Anthony Towns wrote: Presumably sections and tasks will both be subsumed by this. I think these should probably be handled differently: saying I want the games task should probably default to installing all; whereas you'd probably not want to say I want the games section and have all of them installed. Well, is this really an issue? If we maintain the task-* prefix it becomes clear to the user.. Maybe someone will want to install a full section - especially if our sections become significantly more useful! Changing the meaning of Section like this is probably dependent on getting dinstall rewritten and the archive restructured first. Hm, Possibly. I'd have to ask James of course. be installed. The UI tool will track when new packages are added to groups and present that information in conjunction with the traditional new packages display. This sort of behaviour probably wouldn't be suitable for sections. Are there any other grouping style things apart from sections and tasks that we can consider? Why? Right now our sections are pretty useless because they have too wide a mishmash of things in them. But that doesn't have to remain true. This makes the extra priority not really fit in though: while you can (in theory) install all packages of any of the other priorities you specifically *can't* do this with packages in extra. This priority is True - eliminate it would be my answer. 'extra' packages are grouped into a view by sections or by name - but not by priority. I suspect you'd want a different interface to play with priorities than with tasks though, too. Possibly, I don't know.. (if you *really* group everything into just one way of doing things), but I think this would probably require icky handling on behalf of apt or dselect. It probably *would* make it much easier to introduce new styles of groupings in future though. 
If people want to see this then internally I will convert all groupable things into whatever the internal group representation is - that makes it much, much, much simpler to deal with. It isn't so important if that is done in the archive or not. Do people like this idea? I mean - if nobody cares I'm certainly not going to spend any time on it. Jason
Re: Potato now stable
On Tue, Aug 15, 2000 at 12:28:12AM -0600, Jason Gunthorpe wrote: Well, this is what I was trying to say before - logically it makes a lot of sense if packages are members of groups, this is the reverse of what we have now - a list of packages in a group. Delivery and storage of this data has *lots* of options.. Let me outline more clearly how I think task packages should work from a users POV: The user should see a list of groups (I will call them this because I think groupings can be more general than just tasks). The UI tool will allow sorting and searching of the groups and when browsing individual packages it will be possible to see what groups they are part of. Presumably sections and tasks will both be subsumed by this. I think these should probably be handled differently: saying I want the games task should probably default to installing all; whereas you'd probably not want to say I want the games section and have all of them installed. So maybe we'd rather cope with this by having different tags with similar syntax, but different semantics, similar to the Depends/Recommends/Suggests split. So:

    Package: psdoom
    Section: games, admin
    Task: bofh
    Description: System administration with a rocket launcher

might be the way to go. Changing the meaning of Section like this is probably dependent on getting dinstall rewritten and the archive restructured first. The user can select that a group is of interest to them and mark it for 'installation'. Once done this means all packages currently in the group will be installed and all new packages added to the group in future will be installed. The UI tool will track when new packages are added to groups and present that information in conjunction with the traditional new packages display. This sort of behaviour probably wouldn't be suitable for sections. Are there any other grouping style things apart from sections and tasks that we can consider? 
Important/standard/etc priorities would become mega-groups, most people would run with important and standard set to install - [like dselect does], but this becomes optional - and much more controlled. Asked and answered. Heh. This gives us a good excuse to actually follow policy and make all optional (and above) packages simultaneously installable, too. This makes the extra priority not really fit in though: while you can (in theory) install all packages of any of the other priorities you specifically *can't* do this with packages in extra. This priority is more like the sections we have: it's useful to be able just view extra packages, but all it is is a way of separating them out, rather than a grand unifying property of the packages. Also, unlike sections and tasks, a package can't be in two different priorities at once. So that's essentially the same semantics as tasks, but a different syntax. A slightly more orthogonal way of doing it might be more like:

    a) Essential?
    b) Standard part of a Unix system?
    c) Can all be installed together without conflicting?

So dpkg might be (a) but not (b), while nvi might be (b) but not (a). Required is just (a), Standard is (a) and (b), Optional is (a), (b) and (c), and Extra is whatever's left over. Important is ignored as being silly. I suspect you'd want a different interface to play with priorities than with tasks though, too. The other way of doing it that springs to mind, might be:

    Package: freecraft
    Groups: priority-optional, task-games/networking, section-games

(if you *really* group everything into just one way of doing things), but I think this would probably require icky handling on behalf of apt or dselect. It probably *would* make it much easier to introduce new styles of groupings in future though. Cheers, aj -- Anthony Towns [EMAIL PROTECTED] http://azure.humbug.org.au/~aj/ I don't speak for anyone save myself. GPG signed mail preferred. ``We reject: kings, presidents, and voting. 
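One reading of aj's orthogonal flags is that each priority names the *set* of packages you'd install at that level, with Extra as whatever's left. A toy sketch under that reading (the flag decomposition is his; the function, names, and example flag values are illustrative assumptions):

```python
# Sketch of the orthogonal priority flags described above, read as
# nested install sets.  Each package gets three independent flags:
#   a: essential to any system
#   b: standard part of a Unix system
#   c: co-installable with everything else (no conflicts)
# Extra is then simply everything that falls outside "optional".

def install_set(packages, level):
    """packages: dict name -> (a, b, c); level: which set to install."""
    allowed = {
        "required": lambda a, b, c: a,
        "standard": lambda a, b, c: a or b,
        "optional": lambda a, b, c: a or b or c,
    }[level]
    return {name for name, (a, b, c) in packages.items()
            if allowed(a, b, c)}

# Example flag assignments (guesses for illustration only):
pkgs = {
    "dpkg":  (True,  False, True),   # essential but not "Unix standard"
    "nvi":   (False, True,  True),   # standard Unix, not essential
    "xmame": (False, False, True),   # plain optional
    "exim":  (False, False, False),  # conflicts with other MTAs -> extra
}

install_set(pkgs, "required")   # -> {'dpkg'}
install_set(pkgs, "standard")   # -> {'dpkg', 'nvi'}
```

This makes the "all optional packages are simultaneously installable" policy a direct consequence of flag (c) rather than something checked after the fact.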
We believe in: rough consensus and working code.'' -- Dave Clark pgpuvcicJOK6Y.pgp Description: PGP signature
Re: Implementing testing (was: Re: Potato now stable)
Joey Hess [EMAIL PROTECTED] writes: It's beautiful. I want it now. :-) I couldn't agree more. We could always fine-tune it once we know how it works with live data. But I think you're right. Some way of crash-installing into testing would be nice when dealing with root exploits. -- Peter
Re: Implementing testing (was: Re: Potato now stable)
On Thu, Aug 17, 2000 at 10:17:30PM +1000, Anthony Towns wrote:

Automated Process?
~~~~~~~~~~~~~~~~~~
So pretty much all the policy is encoded in some automated process which updates testing. It works at the moment, basically as follows:

 1. First, it loads up all the Sources and Packages files in testing
    and unstable.
 2. It compares and contrasts them, working out what source packages
    are new in unstable.
 3. For each of these new source packages it checks:
     a. That the package has had two weeks of testing, or it's a medium
        or high urgency package (and has had either one week, or three
        days of testing).
     b. That each binary has been recompiled for each arch it's on.
     c. That each binary has 0 RC bugs, or fewer than the testing
        version does [4].
 4. It then collects the source packages that pass 3, and tries
    installing them in various combinations to see if the number of
    uninstallable packages in testing either drops or remains the same.
    If so, they're in. If not, they're not.

I'd just like to bring up the only point which really worries me about all this... what is the incentive for people to run their machines on 'unstable'? Because a package lying for 3 weeks in unstable says nothing about it being bug-free if no one uses it... but if unstable is now going to be really unstable, I can see lots of the people who currently use 'unstable' using 'testing' instead, satisfying their need for bleeding-edge.. Jules
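The quoted steps 1-3 amount to a simple filter over the unstable package list. A toy model (the data layout and function are assumptions for illustration; the real script on auric also did step 4's installability search and the per-architecture recompile check, both omitted here):

```python
# Toy model of the testing-update filter described above.
# Packages are dicts: name -> {"version", "age_days", "urgency", "rc_bugs"}.

AGE_NEEDED = {"low": 14, "medium": 7, "high": 3}  # days, per urgency (3a)

def candidates(testing, unstable):
    """Source packages in unstable that qualify for promotion to testing."""
    out = []
    for name, pkg in unstable.items():
        cur = testing.get(name)
        if cur is not None and cur["version"] == pkg["version"]:
            continue                                  # nothing new (step 2)
        if pkg["age_days"] < AGE_NEEDED[pkg["urgency"]]:
            continue                                  # not aged enough (3a)
        fewer = cur is not None and pkg["rc_bugs"] < cur["rc_bugs"]
        if pkg["rc_bugs"] > 0 and not fewer:
            continue                                  # buggier than testing (3c)
        out.append(name)
    return out

testing = {"foo": {"version": "1.0", "age_days": 99,
                   "urgency": "low", "rc_bugs": 0}}
unstable = {
    "foo": {"version": "1.1", "age_days": 15, "urgency": "low", "rc_bugs": 0},
    "bar": {"version": "2.0", "age_days": 2,  "urgency": "low", "rc_bugs": 0},
    "baz": {"version": "3.0", "age_days": 4,  "urgency": "high", "rc_bugs": 1},
}
candidates(testing, unstable)   # -> ['foo']  (bar too young, baz has an RC bug)
```

Whatever passes this filter still has to survive the step-4 check that promoting it doesn't increase the number of uninstallable packages in testing.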
Re: Implementing testing (was: Re: Potato now stable)
On Fri, Aug 18, 2000 at 10:34:35AM +0100, Jules Bean wrote: I'd just like to bring up the only point which really worries me about all this... what is the incentive for people to run their machines on 'unstable'? Because a package lying for 3 weeks in unstable says nothing about it being bug-free if no one uses it... Well, you've given the first reason why people would want to run unstable right there: to find bugs and make sure testing's as stable as possible. People might want to do this out of the goodness of their hearts, or they might be running a server on testing and want to run a desktop machine with unstable to make sure their server doesn't accidentally upgrade to something hideously buggy. but if unstable is now going to be really unstable, I can see lots of the people who currently use 'unstable' using 'testing' instead, satisfying their need for bleeding-edge.. Note that there's no particular reason for unstable to be any worse than it is now. Your worst case is that there'll still be bugs in testing that won't be detected before they hit a whole bunch of people. This is no worse than our current situation with unstable. Another reason to run unstable is to live on the actual bleeding edge: testing will always be around two weeks out of date. That can be a fair while, if you're impatient. Supporting this, there's some Apt changes in CVS that'll let people choose a few packages from one distribution and leave the rest from another. Two possibilities come to mind: either running testing most of the time, but using a bunch of programs from unstable because you're interested in their development; or running mostly from unstable except for a few packages you can't afford to have break on that machine. Either way you have a slightly larger buffer between an upload and it making it into testing. *shrug* If it turns out to be a problem, I think it'll end up being mostly self correcting. 
And even if it's not, we're still in a better situation than we are now because some bugs *definitely* won't make it into testing. Cheers, aj -- Anthony Towns [EMAIL PROTECTED] http://azure.humbug.org.au/~aj/ I don't speak for anyone save myself. GPG signed mail preferred. ``We reject: kings, presidents, and voting. We believe in: rough consensus and working code.'' -- Dave Clark pgpZ9GTZNr2yc.pgp Description: PGP signature
Re: Implementing testing (was: Re: Potato now stable)
On Fri, Aug 18, 2000 at 09:26:34PM +1000, Anthony Towns wrote: Another reason to run unstable is to live on the actual bleeding edge: testing will always be around two weeks out of date. That can be a fair while, if you're impatient. Supporting this, there's some Apt changes in CVS that'll let people choose a few packages from one distribution and leave the rest from another. Two possibilities come to mind: either running testing most of the time, but using a bunch of programs from unstable because you're interested in their development; or running mostly from unstable except for a few packages you can't afford to have break on that machine. Either way you have a slightly larger buffer between an upload and it making it into testing. This seems to me to be important. Debian developers typically have to use unstable, or some of it, if they want to compile packages for unstable. This would allow us to pick-and-choose. For example, I maintain a couple of GTK packages and a GNOME package. I could use 'testing' mostly, but I could install 'unstable' versions of libgtk-dev, libgnome-dev. Then I can recompile balsa as necessary when new versions come out, and this might go some way towards eliminating the massive brokenness which strikes unstable when a new incompatible library version comes out (remember the libgtk1.1.17 era?). At least testing would be (almost) guaranteed a consistent set of packages. Also it's considerations like this which prompted me to start the 'gnome-staging' project when we had that big gnome upgrade. *shrug* If it turns out to be a problem, I think it'll end up being mostly self correcting. And even if it's not, we're still in a better situation than we are now because some bugs *definitely* won't make it into testing. Yes, you're right. Your suggestions certainly can't make anything worse than it is ;-) And they will probably make things better. 
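The pick-and-choose setup Jules describes is what eventually surfaced as APT's preferences ("pinning") mechanism. A sketch of such a configuration, using the syntax apt later shipped rather than whatever was in CVS at the time (mirror URL and pinned package names are illustrative):

```
# /etc/apt/sources.list -- track both distributions:
deb http://http.us.debian.org/debian testing main
deb http://http.us.debian.org/debian unstable main

# /etc/apt/preferences -- default everything to testing, but let the
# development libraries Jules mentions follow unstable:
Package: *
Pin: release a=testing
Pin-Priority: 900

Package: libgtk-dev libgnome-dev
Pin: release a=unstable
Pin-Priority: 990
```

With this in place routine upgrades track testing, the pinned -dev packages follow unstable, and a one-off package can be pulled with an explicit target release (e.g. `apt-get install -t unstable somepackage`).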
I'm not convinced they're going to be the silver bullet for speeding our releases, but practical experience suggests that theoretical ponderings about the release process are almost pointless ;-) Jules
Re: Implementing testing (was: Re: Potato now stable)
Jules Bean [EMAIL PROTECTED] wrote: I'd just like to bring up the only point which really worries me about all this... what is the incentive for people to run their machines on 'unstable'? I for one like the bleeding-edge. I like stuff that breaks, because I get to fix it. I like filing bug reports. There are other twisted people like me. I will run unstable. -- Don't worry -- shop.
Re: Implementing testing (was: Re: Potato now stable)
On 18-Aug-00, 06:26 (CDT), Anthony Towns aj@azure.humbug.org.au wrote: Supporting this, there's some Apt changes in CVS that'll let people choose a few packages from one distribution and leave the rest from another. To whoever implemented this feature: ThankyouThankyouThankyou -- it's something I've wanted to do for a long time. Steve
Re: Implementing testing (was: Re: Potato now stable)
On Fri, 18 Aug 2000, Edward Betts wrote: Jules Bean [EMAIL PROTECTED] wrote: I'd just like to bring up the only point which really worries me about all this... what is the incentive for people to run their machines on 'unstable'? I for one like the bleeding-edge. I like stuff that breaks, because I get to fix it. I like filing bug reports. There are other twisted people like me. I will run unstable. Actually, unstable is wonderful for desktop machines -- Debian developers are conservative enough that the programs in unstable are actually pretty stable, and on the desktop latest and best is way more important than rock solid (IOW, on the desktop I'm prepared to have applications crash at the rate seen in unstable, meaning just enough to be able to file a bug report every now and then) -- Moshe Zadka [EMAIL PROTECTED] There is no IGLU cabal. http://advogato.org/person/moshez
Re: Implementing testing (was: Re: Potato now stable)
On Fri, Aug 18, 2000 at 10:34:35AM +0100, Jules Bean wrote: I'd just like to bring up the only point which really worries me about all this... what is the incentive for people to run their machines on 'unstable'? In my case, curiosity to test new stuff without having to deal with the other side (rpm-based distros), and the assurance that all new packages, even if they're buggy, fall into Debian's way of putting together a distro. Don't forget that most people don't have the time/incentive to learn a new distro, and Debian stable is pretty outdated for a desktop most of the time (see slink). I also find that testing new stuff makes you learn and research things that otherwise you really wouldn't care about. Learning new stuff is good. :-) And btw, at HDD prices these days it's quite easy to keep a stable install just in case you need the computer when unstable won't even boot. I for myself will install unstable next week now that I have a working Stormix install (really just Debian + some extra stuff I don't use). -- The best way to escape from a problem is to solve it. Alan Saporta My waste of cyberspace = http://deepblue.dyndns.org :-)
Re: Implementing testing (was: Re: Potato now stable)
Anthony Towns aj@azure.humbug.org.au writes: Another reason to run unstable is to live on the actual bleeding edge: testing will always be around two weeks out of date. That can be a fair while, if you're impatient. At best. Please remember there are some maintainers that will have to be forced to take a two-week-and-one-day vacation in order for their packages in unstable to get two weeks old. /me ducks! Marcelo
Re: Implementing testing (was: Re: Potato now stable)
I'd just like to bring up the only point which really worries me about all this... what is the incentive for people to run their machines on 'unstable'? I don't know - how many people are running glibc 2.1.92 now? How about X 4.0? GNAT 3.13? I'm running two out of the three, because I'm too impatient to wait for unstable. Provided I'm not that much of an anomaly, unstable should be well tested; at the very least, brown paper bag bugs should get caught. -- David Starner
Re: Bug tracking system and testing distribution Re: Potato now stable
Joey Hess writes: Christoph Martin wrote: We have a problem with the bug tracking system as long as we can't really find out to which versions of a package a bug really applies. Most of the time we only have the version of the package where a problem showed up. But we don't know if the bug was introduced with this version or also applies to older ones. And in the case of different distributions, if the bug was reported e.g. for frozen, we don't know if it also exists in newer versions which are already in unstable. This is also a problem if a bug which is in one distribution (like frozen or stable) gets fixed in another (unstable). Another issue is that some bugs only appear on specific architectures (like hurd, or powerpc). We really need a way to specify exactly to which versions a bug applies. As long as we don't have this feature we can't really get the testing distribution to work. Well this is why bug reproducibility is so important. I don't see how a magic bullet to fix this issue is at all possible though. So, what is the policy to do with a package for the testing distribution, if there is an important bug? Do you remove the package unconditionally or do you try to investigate (like in the rc buglist) if the bug really applies? C
Re: Bug tracking system and testing distribution Re: Potato now stable
Christoph Martin wrote: So, what is the policy to do with a package for the testing distribution, if there is an important bug? Do you remove the package unconditionally or do you try to investigate (like in the rc buglist) if the bug really applies? Well if I were AJ I would just mechanically assume critical bugs are really critical, placing the onus on the package maintainer or any other interested parties to correct the status if it happens to be wrong. -- see shy jo
Implementing testing (was: Re: Potato now stable)
Hello world, So, on -devel-announce, I mentioned: * New testing distribution This is a (mostly finished) project that will allow us to test our distribution by making it sludgey rather than frozen: that is, a new distribution is added between stable and unstable, that is regularly and automatically updated with new packages from unstable when they've had a little testing and no new RC bugs. (Anthony Towns; debian-devel) It's basically ready to be stuck in the archive now, as far as I can tell, but since it's not exactly a trivial change, it's probably time to discuss it a bit more. The basic idea, simplified immensely, is to address this problem: * Testing updates to frozen is suboptimal: updates go into incoming, wait there for a while, get added to frozen, we discover they introduce as many release critical bugs as they solve, rinse, repeat. The wait for a while part is particularly suboptimal, but without it, it's not really a freeze. The current way we do things is basically to build a new package, hope it works as advertised, and let people test it. If it doesn't work, we repeat as many times as necessary, or eventually just throw the package out. A better way to handle this, which I suspect everyone's just spontaneously reinvented as they read the above, is to try to keep around a previous version of the package that was usable. That way if the new packages don't work, we can just keep the old one rather than having to throw it out entirely. That, essentially, is the point of the testing distribution: to contain a consistent set of the most recent believed-to-be-reliable packages. Some subheadings follow. Why call it testing? One thing that the freeze is really bad at is fixing normal bugs. The point of packages in testing is not that they should be perfect or bug-free, just that they should be usable. 
There's a lot of difference between what we'd like to release (0 bugs, many many features) and what we'll accept for release (~0.005 RC bugs :), and this is really where beta testing should fit in. It also sorts nicely compared to stable and unstable :) What does acceptable for release mean? For one thing, it means the packages are all consistent: if libgtk1.2.7 is in the distribution, none of the packages should be depending on libgtk1.2.8. For another, it means packages shouldn't have any release critical bugs. It also means a package should be at the same version across all architectures it's present in [0]. It also means the maintainer of the package should be relatively happy with it. It means the package shouldn't have any release critical bugs: that is, no security holes [1] (critical or grave), the package shouldn't crash your system (critical), it should be usable for someone on the planet at least (grave), and it shouldn't violate policy too severely, by having incorrect dependencies, or no copyright, eg [2] (important). Note that what I'm writing here is what I think's best, and what's implemented. If there's an objectively better way of doing things, well, that's why I'm posting. [3] Okay. So the next question you're probably asking yourselves is how does it work. Well, you don't have to ask yourself, you can ask me. Here's a summary. Archive Layout ~~ As package pools aren't close to being rolled out, I'm opting for as minor a change as possible (which isn't really very minor). So instead of two distributions, stable and unstable, we have three distributions, stable, testing and unstable. As usual packages get uploaded via dinstall to unstable, broken and buggy however they might be. Eventually, by some automated process yet to be described, they eventually get added to the testing distribution. 
After some amount of time testing gets frozen, fixed, and released (the theory being that this will be easier than freezing unstable, fixing it, and releasing). So basically we'd have:

    unstable -- bleeding edge, broken, etc
    testing  -- leading edge, maybe buggy, but working
    stable   -- static, usable, going out of date

Automated Process?
~~~~~~~~~~~~~~~~~~
So pretty much all the policy is encoded in some automated process which updates testing. It works at the moment, basically as follows:

 1. First, it loads up all the Sources and Packages files in testing
    and unstable.
 2. It compares and contrasts them, working out what source packages
    are new in unstable.
 3. For each of these new source packages it checks:
     a. That the package has had two weeks of testing, or it's a medium
        or high urgency package (and has had either one week, or three
        days of testing).
     b. That each binary
Re: Potato now stable
[EMAIL PROTECTED] (Jason Gunthorpe) wrote on 14.08.00 in [EMAIL PROTECTED]: On Mon, 14 Aug 2000, Joey Hess wrote: You know, if apt could only support Recommends, task packages could be I don't care for this much, it breaks the model that apt-get follows, it Well, I'd *very very much* like apt-get to be able to do *something* with Recommends: and Suggests:. Currently, I either have to go to dselect just to see what Recommends: I'm missing, or else do some pretty incredible shell pipelines to handle Suggests: with apt-get. Not good. Now I can certainly accept that it'd be a bad thing to change apt-get's default behaviour, but that doesn't mean some reasonable support could not be done with some command-line switches. I think the interesting functionality would be as follows:

  for (A) Recommends: or (B) Recommends: + Suggests:,
  for (i) a list of packages given on the command line, or
      (ii) all installed packages, or
      (iii) all newly-to-install-or-upgrade packages (that is,
      recursively including packages which would be installed by a
      Recommends:/Suggests:),

list and optionally install those packages, the same way you'd do with extra Depends:. MfG Kai
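The behaviour Kai wants amounts to walking the Recommends:/Suggests: fields of the relevant packages. A toy sketch of extracting them from one Packages-file stanza (hand-rolled parsing for illustration only; a real tool would read apt's own cache, and this ignores alternatives and version restrictions beyond stripping them):

```python
import re

def soft_deps(stanza, fields=("Recommends", "Suggests")):
    """Pull package names out of the given control fields of a single
    Packages-file stanza.  Takes only the first alternative of each
    clause and drops any version restriction -- illustration only."""
    deps = []
    for field in fields:
        m = re.search(rf"^{field}: (.*)$", stanza, re.MULTILINE)
        if not m:
            continue
        for clause in m.group(1).split(","):
            name = clause.split("|")[0].split("(")[0].strip()
            deps.append(name)
    return deps

# Example stanza (field values invented for illustration):
stanza = """Package: mutt
Depends: libc6, libncurses5
Recommends: locales, mime-support
Suggests: urlview, gnupg (>= 1.0)
"""
soft_deps(stanza)   # -> ['locales', 'mime-support', 'urlview', 'gnupg']
```

Feeding such a list back into an install step would give roughly Kai's case (A)/(B) for a single package; cases (ii) and (iii) are the same walk applied to more stanzas.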
Re: Potato now stable
Bas Zoetekouw [EMAIL PROTECTED] writes: I personally would like having hardware detection stuff in woody. Wouldn't it be great to have the install procedure ask you something like hi dude, I've detected that you've got a ne2000 NIC in your computer. Shall I load the appropriate module? (and the same for video, sound, scsi, etc.) I noticed the other day that recent versions of RedHat use something called Kudzu (sp?) to do this. When I took out the network card, it warned me that some hardware was missing, and offered to change some things to compensate. Has anyone looked into porting this to Debian?
Re: Potato now stable
Thus spake Colin Walters ([EMAIL PROTECTED]): I noticed the other day that recent versions of RedHat use something called Kudzu (sp?) to do this. When I took out the network card, it warned me that some hardware was missing, and offered to change some things to compensate. Has anyone looked into porting this to Debian? Currently, in Debian it is being used by sndconfig. It was written specifically for Redhat though and does some things (like editing /etc/conf.modules, linking devices in /dev/) which are probably not desirable in Debian. The detection part can probably be used, though. Mandrake, too, includes a hardware detection library (libdetect). Some time ago, Dan Helfman [EMAIL PROTECTED] (Cc'ed him), was busy packaging it. Dan, have you had any luck yet adapting it to Debian? When a hardware detection library is available, I think I'm going to rewrite sndconfig specifically for Debian instead of editing the Redhat package. Maybe a more general program, which can detect and configure various kinds of hardware, should be created though. -- Kind regards,
Bas Zoetekouw        "If one knows exactly what one is going to do,
[EMAIL PROTECTED]     what's the point of doing it?"
[EMAIL PROTECTED]                          -- Pablo Picasso
pgpeTr3UwO5cT.pgp Description: PGP signature
Re: Potato now stable
On Wed, 16 Aug 2000, Bas Zoetekouw wrote: Has anyone looked into porting this [Kudzu] to Debian? Mandrake, too, includes a hardware detection library (libdetect). Some time ago, Dan Helfman [EMAIL PROTECTED] (Cc'ed him), was busy packaging it. Dan, have you had any luck yet adapting it to Debian? I recall reading a few months ago about a plan to merge ALL of the existing hardware detection routines into one lump, in order to consolidate work and effort. The proposal was met with acceptance by many (if not all) of the major developers (Mandrake, Redhat, Suse, Turbo). You might want to do a search on LWN (www.lwn.net) or Linuxtoday, or elsewhere. I did a quick look and didn't find it, but I know I read about it. Please post if you do find a link to it.
Re: Potato now stable
I recall reading a few months ago about a plan to merge ALL of the existing hardware detection routines into one lump, in order to consolidate work and effort. The proposal was met with acceptance by many (if not all) of the major developers (Mandrake, Redhat, Suse, Turbo) please post if you do find a link to it. reply to my own request: It was on this list I saw it (gee, how nice), and Dan Shearer ([EMAIL PROTECTED]) was organizing it. Wichert replied to his request for help and said he'd love to see this happen, and pointed Dan to boot-floppies as the group to work with for Debian. Dan replied with a long post quoting his plan and more. the thread was in May 2000, http://www.debian.org/List-Archives/debian-boot-0005/msg00471.html starts the thread and http://www.debian.org/List-Archives/debian-boot-0005/msg00482.html contains Dan's proposal
Re: Potato now stable
On Aug 16, Bas Zoetekouw wrote: Mandrake, too, includes a hardware detection libarary (libdetect). Some time ago, Dan Helfman [EMAIL PROTECTED] (Cc'ed him), was busy packaging it. Dan, have you had any luck yet adapting it to Debian? Dan has reasonably up-to-date packages of libdetect and HardDrake (Lothar) at http://torsion.org/witten/debian. I uploaded an earlier release of libdetect to Incoming as Dan's sponsor, but it's been languishing there for weeks. Chris
Re: Potato now stable
On Wed, Aug 16, 2000 at 08:46:38AM +0200, Bas Zoetekouw wrote: Thus spake Colin Walters ([EMAIL PROTECTED]): I noticed the other day that recent versions of RedHat use something called Kudzu (sp?) to do this. When I took out the network card, it warned me that some hardware was missing, and offered to change some things to compensate. Has anyone looked into porting this to Debian? Currently, in Debian it is being used by sndconfig. It was written specifically for Redhat though and does some things (like editing /etc/conf.modules, linking devices in /dev/) which are probably not desirable in Debian. The detection part can probably be used, though. Mandrake, too, includes a hardware detection library (libdetect). Some time ago, Dan Helfman [EMAIL PROTECTED] (Cc'ed him), was busy packaging it. Dan, have you had any luck yet adapting it to Debian? Yup, libdetect required very little in the way of modification and works fairly well on Debian. Harddrake (formerly Lothar) is the GUI and console interface for libdetect. It handles autodetection and configuration. Harddrake did require some modifications to work with Debian's modutils, and those patches have now been integrated into the upstream source. When a hardware detection library is available, I think I'm going to rewrite sndconfig specifically for Debian instead of editing the Redhat package. Maybe a more general program, which can detect and configure various kinds of hardware, should be created though. You might try playing with Harddrake to see if it suits Debian's needs for this sort of thing. It has both Gtk and Newt interfaces. And by the way, Harddrake also has a kudzu mode that can be called from boot scripts. I haven't tried it out yet, but it's intended to do much the same thing as Redhat's kudzu.
-- Dan Helfman UCLA Linux Users Group: http://www.linux.ucla.edu My GnuPG key: http://torsion.org/witten/public-key.txt
Re: Bug tracking system and testing distribution Re: Potato now stable
Christoph Martin wrote: We have a problem with the bug tracking system as long as we can't really find out to which versions of a package a bug really applies. Most times we only have the version of the package where a problem showed up. But we don't know if the bug was introduced with this version or also applies to older ones. And in the case of different distributions, if the bug was reported e.g. for frozen we don't know if it also exists in newer versions which are already in unstable. This is also a problem if a bug which is in one distribution (like frozen or stable) gets fixed in another (unstable). Another issue is that some bugs only appear on particular architectures (like hurd, or powerpc). We really need a way to specify exactly to which versions a bug applies. As long as we don't have this feature we can't really get the testing distribution to work. Well this is why bug reproducibility is so important. I don't see how a magic bullet to fix this issue is at all possible though. -- see shy jo
Re: Potato now stable
Anthony Towns wrote: By omission, this does a fairly impressive injustice to everyone else who helped with development, testing, fixing bugs, documenting problems and work arounds, giving support, and everything else everyone's done in the past months, so, well, thanks everyone! Seconded! * Tasks are great, but task-* packages suck when some of the packages included have release critical bugs. (Remove the package, the entire task breaks) You know, if apt could only support Recommends, task packages could be a lot saner. Sure, it'd still be ugly if something they depended on went missing, but at least they'd still be usable. I think apt could support Recommends like this: * Automatically install all recommended packages when installing/upgrading a package. * If a package that something recommended was manually removed, don't re-install it next time a package that recommends it is installed. Of course whether this is doable is up to Jason.. More later, my battery is dying. -- see shy jo
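Joey's two rules can be sketched as a toy resolver. This is only an illustration of the proposed behaviour, not apt's actual code; the package names and data structures are invented for the example:

```python
# Sketch of Joey's proposal: follow Recommends automatically, but remember
# which recommended packages the user removed by hand and never pull those
# back in. "manually_removed" is the state apt would have to keep on disk.

def resolve_install(package, depends, recommends, manually_removed):
    """Return the set of packages to install when `package` is requested.

    depends/recommends map a package name to a list of package names.
    """
    to_install = set()
    queue = [package]
    while queue:
        pkg = queue.pop()
        if pkg in to_install:
            continue
        to_install.add(pkg)
        queue.extend(depends.get(pkg, []))
        # Recommends are followed too, unless the user removed them before.
        queue.extend(r for r in recommends.get(pkg, [])
                     if r not in manually_removed)
    return to_install
```

With this rule a task package built on Recommends stays installable even after the user (or the archive) drops one of its recommended members.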
Re: Potato now stable
On Mon, 14 Aug 2000, Joey Hess wrote: * Tasks are great, but task-* packages suck when some of the packages included have release critical bugs. (Remove the package, the entire task breaks) You know, if apt could only support Recommends, task packages could be a lot saner. Sure, it'd still be ugly if something they depended on went missing, but at least they'd still be usable. I think apt could support Recommends like this: * Automatically install all recommended packages when installing/upgrading a package. * If a package that something recommended was manually removed, don't re-install it next time a package that recommends it is installed. Of course whether this is doable is up to Jason.. I don't care for this much, it breaks the model that apt-get follows, it adds this extra variable of 'things that were removed' which can lead to subtle unexpected behavior. The way it is now the command line tool consistently ignores recommends/suggests, like dpkg. Higher level tools are free to do whatever they want. Tasks are better handled through some kind of non-package means. I've long said we need to determine some kind of meta-package scheme (a 'package' whose only purpose is to logically group other packages). Clearly the desired effect of all meta-packages is to provide the user with a single node to manipulate and view a group of packages. They should have special properties in any UI; you should be able to view and manipulate their grouped packages. Ideally the grouping would have priorities of packages (ie -python doesn't need to install every freaking package, but some are definitely critical) and the ability to track and optionally install new packages added to the group, remove the whole group, etc. All this data is orthogonal to the dependency structure.
Perhaps if some thought is put into this a rational solution to the package splitting problem can be found (convert the old 'big' package into a meta-package before touching the original 'big' package - provides a simple and safe transition?) If you take this thought to its logical extent then things like the important priority are merely hard coded examples of this. Logically, the way to represent this is to have packages declare their membership in a grouping. This could be done via the override file so as to maintain a centralized authority like we have now with the task packages. Groups and user preferences about them could be stored separate from the status file. Jason
Re: Potato now stable
Jason Gunthorpe wrote: Tasks are better handled through some kind of non-package means. I've long said we need to determine some kind of meta-package scheme (a 'package' whose only purpose is to logically group other packages). How is introducing some bastardized form of package (perhaps it's just an entry in the Packages file or something) going to allow us to address problems like aj was talking about, where one of the things it depends on is removed from debian, and it needs to be updated? The problem, as I see it, is that task packages declare a strong dependency where often none really exists. After all, if it were a real dependency, we'd not be having this discussion, since aj/james/whoever's course of action then would have been a lot more clear: remove both packages, or fix one. Thus, it still seems to me that allowing that to be weakened to a recommends would be the ideal solution. Clearly the desired effect of all meta-packages is to provide the user with a single node to manipulate and view a group of packages. They should have special properties in any UI, you should be able to view and manipulate their grouped packages. Ideally the grouping would have priorities of packages (ie -python doesn't need to install every freaking package, but some are definitely critical) and the ability to track and optionally install new packages added to the group, remove the whole group, etc. I don't disagree that all this would be nice, but it seems like icing on a cake that's just hiding the nasty holes. Logically, the way to represent this is to have packages declare their membership in a grouping. You know, we had this discussion already. Please see the list archives of this winter. We decided this was not the correct way to do it, because metapackages should be maintained by one person.
Allowing anyone to add a reverse-dependency and get a package into a metapackage will result in metapackages that are ill-thought-out collections of stuff, without the guiding thought behind them that a real package, with a real maintainer, has. In other words, they would look something like the sections on our ftp site do now, but probably even less organized. Is that game in games/, or x/, or inexplicably, in networking/? :-P Compare with task-games. I have put a *lot* of thought into what goes into that package. If it did not have one single maintainer, with a coherent vision, it would be a random set of games, probably eventually growing to include a large portion of the games in debian. Which would defeat its purpose. -- see shy jo
Re: Potato now stable
On Mon, Aug 14, 2000 at 10:55:59PM -0600, Jason Gunthorpe wrote: Clearly the desired effect of all meta-packages is to provide the user with a single node to manipulate and view a group of packages. They should have special properties in any UI, you should be able to view and manipulate their grouped packages. Ideally the grouping would have priorities of packages (ie -python doesn't need to install every freaking package, but some are definitely critical) and the ability to track and optionally install new packages added to the group, remove the whole group, etc. Under the Irix packaging system (quite nice UI except that it has to handle Irix packages..) packages exist in a hierarchy, with lowest level packages quite fine grained. For example:

I  fw_bzip2               02/28/2000  bzip2-0.9.0c  Compress/decompress files
I  fw_bzip2.man           02/28/2000  bzip2-0.9.0c  man pages
I  fw_bzip2.man.bzip2     02/28/2000  bzip2-0.9.0c  man pages
I  fw_bzip2.man.info      02/28/2000  bzip2-0.9.0c  info pages
I  fw_bzip2.man.relnotes  02/28/2000  bzip2-0.9.0c  Release Notes
I  fw_bzip2.sw            02/28/2000  bzip2-0.9.0c  execution only env
I  fw_bzip2.sw.bzip2      02/28/2000  bzip2-0.9.0c  execution only env
I  fw_bzip2.sw.hdr        02/28/2000  bzip2-0.9.0c  header files
I  fw_bzip2.sw.lib        02/28/2000  bzip2-0.9.0c  shared libraries
I  fw_bzip2.sw64          02/28/2000  bzip2-0.9.0c  64-bit execution only env
I  fw_bzip2.sw64.lib      02/28/2000  bzip2-0.9.0c  64-bit shared libs

Some of these are marked for default installation if the top level (fw_bzip2) is selected. Others are non-default. Still others (the sw64's) are only installed by default if the hardware they're being installed on is capable of running them. You can unfold the selection and mark components (full fledged packages themselves) for install if you really want to, but most times the defaults are fine. You can use wildcards for instance to delete all info page packages without going beyond the package system. We could probably use something similar to handle localization and multiple architectures.
Many of our packages are already hierarchical ( x-dev, x-doc, libx, ...), but not in a formal way that the user interface can use to shorten the list a user sees. -Drake
Re: Potato now stable
Drake Diedrich wrote: Under the Irix packaging system (quite nice UI except that it has to handle Irix packages..) packages exist in a hierarchy, with lowest level packages quite fine grained. Wow, I quite like this. How could we do it? -- see shy jo
Re: Potato now stable
On Mon, Aug 14, 2000 at 10:12:38PM -0700, Joey Hess wrote: The problem, as I see it, is that task packages declare a strong dependency where often none really exists. After all, if it were a real dependency, we'd not be having this discussion, since aj/james/whoever's course of action then would have been a lot more clear: remove both packages, or fix one. Thus, it still seems to me that allowing that to be weakened to a recommends would be the ideal solution. What I'd like to happen is basically be able to remove the package, and just have the task automatically act as though that package had never existed. Not complain in dselect about it, not worry people when Apt gives you a warning, not do anything. One way of doing this would be to include the information in the package's entry in the Packages file, a la:

Package: foo
Version: 2.71828
Depends: libc6
Task: metasyntactical-packages

If the package is removed from the distribution, it's implicitly no longer in the task. The problem with doing it this way is you can't easily make an aj-favourites task separate from the distribution that includes things from the distribution. The only way around this I can think of is to make a task-aj-favourites package that has a Task: aj-favourites field and Depends: on the packages from the appropriate distribution. This may be inelegant. It's not a problem within Debian, though (but the current way of doing things is). Another way of doing it might be to generate task packages as we have now as part of dinstall, and install them into the archive. Another way would be to not do this as part of dinstall, but on an autobuilder. This wouldn't be as effective if there's a security update that needs to get out immediately though. Compare with task-games. I have put a *lot* of thought into what goes into that package.
If it did not have one single maintainer, with a coherent vision, it would be a random set of games, probably eventually growing to include a large portion of the games in debian. Which would defeat its purpose. How we make this available to apt, or frontends, doesn't have to influence how we actually maintain it. Consider, for example:

#!/usr/bin/perl -w
my %pkgtasks = ();   # e.g. $pkgtasks{netbase} = [ "networking" ];
while (<STDIN>) {
    chomp;
    next unless m/^(\S+)\s+(\S+)$/;   # task package
    push @{$pkgtasks{$2}}, $1;
}
while (<>) {
    chomp;
    if (m/^Package: (\S+)$/) {
        my $cpkg = $1;
        print "Package: $cpkg\n";
        print "Task: " . join(", ", @{$pkgtasks{$cpkg}}) . "\n"
            if (defined $pkgtasks{$cpkg});
    } elsif (m/^Task:.*$/) {
        # discard
    } else {
        print "$_\n";
    }
}

Used to add Task: fields to a Packages file, something like:

$ cat tasks
networking netbase
metasyntactical-packages foo
$ ./add-task /var/lib/dpkg/available < tasks | less

Cheers, aj -- Anthony Towns [EMAIL PROTECTED] http://azure.humbug.org.au/~aj/ I don't speak for anyone save myself. GPG signed mail preferred. ``We reject: kings, presidents, and voting. We believe in: rough consensus and working code.'' -- Dave Clark
Re: Potato now stable
On Mon, 14 Aug 2000, Joey Hess wrote: Jason Gunthorpe wrote: Tasks are better handled through some kind of non-package means. I've long said we need to determine some kind of meta-package scheme (a 'package' whose only purpose is to logically group other packages). How is introducing some bastardized form of package (perhaps it's just an entry in the Packages file or something) going to allow us to address problems like aj was talking about, where one of the things it depends on is removed from debian, and it needs to be updated? You already have a bastardized form of packages, that's what a task package is! The reason there are problems is specifically because task-packages *aren't really packages* and we don't have enough expressiveness in our packaging system to make them really work in a good way. [nor should we, IMHO] Trying to put hack upon hack into the package tools to support magic-special packages in a limited fashion does not seem to be a good solution because: 1) They are not packages! 2) You will never get everything you want because you are treating specialized data in a generic way. The exact problem AJ is talking about is easily handled when you no longer have task packages because suddenly there are no more dependencies, you have a grouping which can be as strong or weak as the user+packager desires. Your suggestion would work to solve AJ's problem, but it suddenly makes apt-get act really damn weird. You now have a black list of packages which are hidden from recommends. This black list can't be updated if someone uses dpkg because it doesn't know about it, there is not really a super-good way to edit it, and it doesn't buy you anything in terms of ease of use and organization. I suspect the model APT GUIs, and perhaps apt-get too, will use for recommends will be a white list where specific packages have their recommends and suggests promoted to depends under user control.
That list can be fully maintained safely within APT and matches the familiar model that dselect uses. (pull stuff in, don't exclude stuff out) We also already have the concept of groups (priority/section), our users are familiar with it - we even have automatic groups ala task-packages (priority=important). So why not enhance that and create something really spanky? priorities of packages (ie -python doesn't need to install every freaking package, but some are definitely critical) and the ability to track and optionally install new packages added to the group, remove the whole group, etc. I don't disagree that all this would be nice, but it seems like icing on a cake that's just hiding the nasty holes. Eh? That's completely unreasonable - the entire point is that expressing groupings using the dependency mechanism has severe drawbacks, you have to get away from that - you can't consider anything else as full of holes and expect to fix any of the drawbacks! Logically, the way to represent this is to have packages declare their membership in a grouping. You know, we had this discussion already. Please see the list archives of this winter. We decided this was not the correct way to do it, I'm well aware of that - and that has zippo to do with delivery of the data. We already have the ability to override sections and priority; groups are not a big stretch. Inlining group membership with each package is a good way to deliver this data without making major changes to the delivery system; another option is to throw another index file in the archive or somehow abuse the content of the Packages file. But the best option from a modeling viewpoint is to have packages be members of groups, not have groups with packages in them. Jason
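The "white list" model Jason describes can be sketched in a few lines. This is a toy illustration of the idea only, not APT code; every name and structure here is invented for the example:

```python
# Sketch of the whitelist model: recommends/suggests are ignored by default,
# but for packages the user has opted in ("promote") they are treated as
# hard dependencies. This is the opposite of a blacklist: the user pulls
# stuff in rather than excluding stuff out.

def effective_depends(pkg, depends, recommends, suggests, promote):
    """Compute the dependency list the command-line tool would act on."""
    deps = list(depends.get(pkg, []))
    if pkg in promote:  # user opted this package into the whitelist
        deps += recommends.get(pkg, [])
        deps += suggests.get(pkg, [])
    return deps
```

Because the whitelist only ever adds dependencies, dpkg operating behind APT's back cannot invalidate it the way it would invalidate a removed-packages blacklist.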
Re: Potato now stable
On Mon, 14 Aug 2000, Joey Hess wrote: Drake Diedrich wrote: Under the Irix packaging system (quite nice UI except that it has to handle Irix packages..) packages exist in a hierarchy, with lowest level packages quite fine grained. Wow, I quite like this. How could we do it? This is the ultimate in micropackaging - doing something like that would solve so many different requests in one big *splat*. We could have sparc32/64 binaries, PIII optimized binaries, systems without /usr/doc, etc. Off hand, I would suspect you'd take an arbitrary .deb and carve it into sub packages internally - this is for efficiency.. Other debs can come along and cleanly install over the sub packages. Ex: You have apt_1.1_i386.deb which contains 'doc' 'binary'. And an apt_1.1_i686_bin.deb which just has 'binary'. Inside, package tools would sort that out through some magic means.. Of course this is all just off hand... : Jason
Re: Potato now stable
Anthony Towns wrote: Another way of doing might be to generate task packages as we have now as part of dinstall, and install them into the archive. Another way would be to not do this as part of dinstall, but on an autobuilder. Well, if you're going to do that, what's stopping you from pulling the task's source package and NMUing? I don't really see the difference. How we make this available to apt, or frontends doesn't have to influence how we actually maintain it. Ok, _this_ I can accept. -- see shy jo
Re: Potato now stable
Jason Gunthorpe wrote: Off hand, I would suspect you'd take an arbitrary .deb and carve it into sub packages internally - this is for efficiency.. Other debs can come along and cleanly install over the sub packages. Ex: You have apt_1.1_i386.deb which contains 'doc' 'binary'. And an apt_1.1_i686_bin.deb which just has 'binary'. Inside, package tools would sort that out through some magic means.. Perhaps these sub-packages would be additional files in the ar file. Perhaps those files themselves should be in .deb format? Then we have sub package nesting and meta-data too. Of course this is all just off hand... : Same. -- see shy jo
Re: Potato now stable
Joey Hess [EMAIL PROTECTED] writes: Jason Gunthorpe wrote: Tasks are better handled through some kind of non-package means. I've long said we need to determine some kind of meta-package scheme (a 'package' whose only purpose is to logically group other packages). How is introducing some bastardized form of package (perhaps it's just an entry in the Packages file or something) going to allow us to address problems like aj was talking about, where one of the things it depends on is removed from debian, and it needs to be updated? In the one bit you trimmed out, Jason said: Logically, the way to represent this is to have packages declare their membership in a grouping. This could be done via the override file so as to maintain a centralized authority like we have now with the task packages. Groups and user preferences about them could be stored separate from the status file. This wouldn't be that difficult. Just add a 'Task:' field to the packages. Have the default be non-existent (empty). In order to add information to the overrides file (and not put the load on the ftp people's shoulders) have a 'maintained overrides', that is, a bit of the overrides file maintained just like a normal package (e.g., task-games.overrides). In this way you satisfy aj's concerns (changing this would be as short as editing a text file, signing and uploading) and provide the functionality of task-packages, provided UI tools support this field. One problem here is that sooner or later someone will start thinking of such sick things as 'local overrides'. Marcelo
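As a concrete illustration of the 'maintained overrides' idea, a task-games.overrides fragment might look something like the following. The file name, column layout, and task column are invented for this sketch; the real override file of the era only carried package, priority, and section columns:

```
# package     priority    section    task
nethack       optional    games      games
xbill         optional    games      games
gnuchess      optional    games      games
```

The archive scripts would merge such fragments into the master override file, and dinstall would then emit the Task: field into each package's Packages stanza.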
Re: Potato now stable
Marcelo E. Magallon wrote: in the one bit you trimmed out, Jason said: Er, no, I did not ignore that, nor did I trim all of it. -- see shy jo
Re: Potato now stable
On Mon, Aug 14, 2000 at 11:08:59PM -0700, Joey Hess wrote: Perhaps these sub-packages would be additional files in the ar file. Perhaps those files themselves should be in .deb format? Then we have sub package nesting and meta-data too. Of course this is all just off hand... : Same. Irix does that with its packages: multiple micropackages in one file. Actually, you can also get jumbo packages, just tarballs (tardist) with lots of packages inside them, each package with lots of micropackages. Beyond the modifications to the .deb format we'd also need to modify the way dpkg stores package information. ls /var/lib/dpkg/info just took 30 seconds on my machine. Each package has several files in there. At 4K and 1 inode apiece, the dpkg overhead is even higher than the overhead for microdebs all in separate files. Tasks wouldn't be well served by jumbo packages - there'd still have to be some support for a hierarchy beyond the .deb layer, so the hierarchy support should probably not be related to the actual file handling at all. Maybe .debs could simply contain other, simpler .debs. That should be fairly easy to unpack with existing dpkg functions (whatever extracts streams from the ar format), and would allow an infinitely deep and arbitrarily broken-into-separate-files division. Signed .debs are probably an issue. -Drake
Re: Potato now stable
On Tue, 15 Aug 2000, Anthony Towns wrote: What I'd like to happen is basically be able to remove the package, and just have the task automatically act as though that package had never existed. Not complain in dselect about it, not worry people when Apt gives you a warning, not do anything. Well, this is what I was trying to say before - logically it makes a lot of sense if packages are members of groups; this is the reverse of what we have now - a list of packages in a group. Delivery and storage of this data has *lots* of options.. Let me outline more clearly how I think task packages should work from a user's POV: The user should see a list of groups (I will call them this because I think groupings can be more general than just tasks). The UI tool will allow sorting and searching of the groups, and when browsing individual packages it will be possible to see what groups they are part of. The user can select that a group is of interest to them and mark it for 'installation'. Once done this means all packages currently in the group will be installed and all new packages added to the group in future will be installed. The UI tool will track when new packages are added to groups and present that information in conjunction with the traditional new packages display. A tree-like display can be used to show what packages are part of a group and allow individual selection. Since some groups are quite large it may make sense to categorize the package lists into finer subgroups (primarily to help the user navigate around, but they could be separate at the top level too) that can all be individually selected for install. [Example: task-python-critical, task-python-web, task-python-gui] Since there is a tree-like display the user can pick off individual sub-packages of the group, which would now serve nicely as an organizational tool. Packages may belong to many groups and appear in multiple places in this tree - again for organization.
Important/standard/etc priorities would become mega-groups; most people would run with important and standard set to install - [like dselect does], but this becomes optional - and much more controlled. I can see that blacklisting within a group may be useful on a limited scale. The blacklist would be expressed as 'packages a1,a2.. in group b are not to be installed, but the rest of b is', which allows undesired components to be eliminated by the user. Most groups should be designed to minimize this, hence this is primarily aimed at the mega-groups rather than smaller ones. (This is a similar, but stronger statement than your original proposal - not automatic either) So now we can bring organization in on a grand scale. I can envision task package groups that are like we have now, small very focused things, priority groups which reflect the standard UNIX view of a system, and new kinds of purely organizational groups (how about a gnome mega-group?). We could bring some sanity to the section arrangement by having things be part of multiple sections, and provide stronger guidelines and more sections. And if you recall what I said in my last message about recommends - take this same concept and apply it to a 'micro-group' of a single package (where recommends and suggests form sub groups) and you have a simple understandable concept that can be applied and used for about 5 different things! In my book that's a good thing! If we can work out the details I think an idea like this could help in *a lot* of areas, and is not really super complicated for us to deploy! Jason
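The selection semantics outlined above (groups marked for install pull in all members, minus a per-group blacklist of excluded packages) can be sketched as follows. This is a toy model only; the group and package names are invented:

```python
# Sketch of Jason's group model: the install set is the union of all
# selected groups' members, with each group's per-group blacklist
# ("packages a1,a2.. in group b are not to be installed") subtracted.

def install_set(groups, selected, blacklist):
    """groups: group name -> set of member package names;
    selected: set of group names the user marked for installation;
    blacklist: group name -> set of packages excluded from that group."""
    result = set()
    for g in selected:
        result |= groups.get(g, set()) - blacklist.get(g, set())
    return result
```

Note the blacklist is scoped to one group: a package excluded from a mega-group is still installed if some other selected group contains it.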
Re: Potato now stable
Thus spake Anthony Towns (aj@azure.humbug.org.au): Once we get to woody, though, there are probably two things that are particularly worthwhile doing. As per usual, we should probably have a few weeks discussing release goals for woody to see what sort of direction we want to head (and then going ahead and implementing whatever we feel like anyway). I personally would like having hardware detection stuff in woody. Wouldn't it be great to have the install procedure ask you something like hi dude, I've detected that you've got a ne2000 NIC in your computer. Shall I load the appropriate module? (and the same for video, sound, scsi, etc.) -- Kind regards, Bas Zoetekouw
Bug tracking system and testing distribution Re: Potato now stable
Anthony Towns writes: * Working out which bugs are really release-critical and fixing their severity so we know where we're at is overly time consuming. We have a problem with the bug tracking system as long as we can't really find out to which versions of a package a bug really applies. Most times we only have the version of the package where a problem showed up. But we don't know if the bug was introduced with this version or also applies to older ones. And in the case of different distributions, if the bug was reported e.g. for frozen we don't know if it also exists in newer versions which are already in unstable. This is also a problem if a bug which is in one distribution (like frozen or stable) gets fixed in another (unstable). Another issue is that some bugs only appear on particular architectures (like hurd, or powerpc). We really need a way to specify exactly to which versions a bug applies. As long as we don't have this feature we can't really get the testing distribution to work. * New testing distribution This is a (mostly finished) project that will allow us to test our distribution by making it sludgey rather than frozen: that is, a new distribution is added between stable and unstable, that is regularly and automatically updated with new packages from unstable when they've had a little testing and no new RC bugs. (Anthony Towns; debian-devel) Some people who met at the Usenix Debian BOF were discussing this issue. And because I was not aware of your work on testing I tried to build my own solution as a proof of concept. I set up a server at ftp://lucy.verwaltung.uni-mainz.de/pub/debian-local/ which has a stabilized distribution. This is constructed out of potato's main part (and can be easily enhanced by contrib etc.). Only packages which have been in potato longer than (the arbitrary value of) 14 days get into stabilized. The packages must also have correct dependencies and no important or more severe bugs in the bug tracking system older than 14 days.
The scripts are working quite well, but only then did I find out about the problem with the bug tracking system.

Christoph
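The selection rules described above (14 days of aging in potato, satisfiable dependencies, no important-or-worse bug open longer than 14 days) can be sketched roughly as follows. This is a minimal illustration, not the actual scripts behind the ftp server; the package records and field names are hypothetical stand-ins for what would really be read from Packages files and the BTS index.

```python
from datetime import date, timedelta

# Hypothetical package records; illustrative data only.
packages = [
    {"name": "apache",  "in_potato_since": date(2000, 7, 1),
     "deps_ok": True, "open_bugs_since": []},
    {"name": "sawfish", "in_potato_since": date(2000, 8, 10),
     "deps_ok": True, "open_bugs_since": []},
    {"name": "wget",    "in_potato_since": date(2000, 6, 1),
     "deps_ok": True, "open_bugs_since": [date(2000, 6, 15)]},
]

QUARANTINE = timedelta(days=14)

def eligible(pkg, today):
    """A package enters 'stabilized' if it has aged 14 days in potato,
    its dependencies are correct, and no important-or-higher bug has
    been open against it for more than 14 days."""
    aged = today - pkg["in_potato_since"] >= QUARANTINE
    no_old_bugs = all(today - opened < QUARANTINE
                      for opened in pkg["open_bugs_since"])
    return aged and pkg["deps_ok"] and no_old_bugs

today = date(2000, 8, 20)
stabilized = [p["name"] for p in packages if eligible(p, today)]
print(stabilized)  # apache is old enough and bug-free; the others fail a rule
```

Note that, exactly as the message above observes, such a filter has no way of knowing whether an open bug applies to the candidate version at all; it must pessimistically assume it does.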
Re: Potato now stable
Today, Jason Gunthorpe [EMAIL PROTECTED] wrote:

  The user should see a list of groups (I will call them this because I think groupings can be more general than just tasks). The UI tool will allow sorting and searching of the groups, and when browsing individual packages it will be possible to see what groups they are part of. The user can select that a group is of interest to them and mark it for 'installation'. Once done, this means all packages currently in the group will be installed, and all new packages added to the group in future will be installed. The UI tool will track when new packages are added to groups and present that information in conjunction with the traditional new-packages display. A tree-like display can be used to show what packages are part of a group and allow individual selection. Since some groups are quite large, it may make sense to categorize the package lists into finer subgroups (primarily to help the user navigate around, but they could be separate at the top level too) that can all be individually selected for install. [Example: task-python-critical, task-python-web, task-python-gui]

Hmm. I wonder if something like Keywords would help with that. For example, emacs fits many a description, say Editor (understatement), Desktop and Development/IDE (sounds right to me). Now, returning to tasks, if the user chose to use his computer as a development environment, he could choose Emacs as his IDE.

Also, this would speed up searching a lot. When, for example, I want to install a Tetris-like game, I could just search for the keyword Games/Tetris and get every Tetris-like game in the distribution, and not only a substring match on some part of the description. This, of course, would require the maintainers to have some discipline in choosing keywords for their packages; I think every part of the program should be mentioned, to make grouping the packages and judging a package's value easier.
I presume that with keywords, a tree-like display with groups and sub-groups could be done, but only where one can say "this thing belongs down there", as with Development/IDE or Games/Tetris. Tetris is a game, not a physics package, so it should be in the Games group. I think that much thought will have to go into the right naming of the groups and their sub-categories. Also, keep in mind that a package could be in many groups and sub-groups at once, and should therefore show up more than once in the package-managing UI. If the user opens the category "expert editors", Emacs will show up, just as it does when the user opens the category "IDEs" under Development.

This does not, in itself, solve the problem that ajt described. Therefore, I think that we should also assign a certain weight to a sub-category, as well as to a package in that category. This could then look like this:

  <control pkg=emacs>
  Keywords: Desktop(0), Editors/Expert(20), Development/IDE(15)
  </control>

Then, for every category and sub-category, there should be a package (or something else; I can't think of anything else at the moment) which has a control field "Category: weight" set, to make it a group and to assign a certain weight to the sub-category. Top-level categories should have 0, I guess. The top-level categories would then be what a task is now, while the sub-categories would allow the user to refine his selection until he reaches the package level.

  Jason

And we could let it have a telepathic user interface, and have it speed up the internet by 2000%, and have it end world hunger, and ... Anyway.

regards,
--
Andreas Stefan Fuchs in Real Life,
aka [EMAIL PROTECTED], [EMAIL PROTECTED], [EMAIL PROTECTED] in NNTP and SMTP,
antifuchs in IRCNet,
and Relf Herbstfresser, Male 1/2 Elf Priest in ADD
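The weighted-keyword scheme proposed above could be prototyped along these lines. This is a speculative sketch only: the Keywords syntax, the sample packages other than emacs, and the helper names are all invented for illustration, since no such field exists in the real control file format.

```python
import re

# Hypothetical Keywords fields in the proposed 'Category(weight)' syntax.
# Only the emacs line comes from the discussion; the rest is made up.
control_fields = {
    "emacs":     "Desktop(0), Editors/Expert(20), Development/IDE(15)",
    "bastet":    "Games/Tetris(10)",
    "gtetrinet": "Games/Tetris(12), Desktop(0)",
}

FIELD_RE = re.compile(r"([\w/+.-]+)\((\d+)\)")

def parse_keywords(field):
    """Turn 'Cat/Sub(weight), ...' into a {category: weight} mapping."""
    return {cat: int(w) for cat, w in FIELD_RE.findall(field)}

def search(keyword):
    """Exact category match -- unlike a substring match on the package
    description, this only returns packages tagged with the keyword."""
    hits = []
    for pkg, field in control_fields.items():
        weights = parse_keywords(field)
        if keyword in weights:
            hits.append((pkg, weights[keyword]))
    return sorted(hits, key=lambda t: -t[1])  # heavier weight first

print(search("Games/Tetris"))  # -> [('gtetrinet', 12), ('bastet', 10)]
```

Searching for Games/Tetris finds exactly the Tetris-like games and ranks them by the weight their maintainers assigned, which is the refinement-by-weight behaviour the message describes.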
Re: Potato now stable
[Please followup to -devel, since I do not yet subscribe to -boot]

On Tue, Aug 15, 2000 at 10:06:10AM +0200, Bas Zoetekouw wrote:

  I personally would like having hardware detection stuff in woody.

This is something Progeny is attempting to accomplish for our version of Debian later this year. Ian Murdock has spent some time looking into it. Of course, everything Progeny does will be made available to the community, particularly the Debian community.

This seems like a good time to ask: how would the boot-floppies team feel about creating a branch in CVS for Progeny? There are probably some things we (Progeny) will do that Debian doesn't want to fool with. We can of course set up our own CVS repository if need be, but this seems a good way to do development out in the open.

--
G. Branden Robinson             | I just wanted to see what it looked like
Debian GNU/Linux                | in a spotlight.
[EMAIL PROTECTED]               | -- Jim Morrison
http://www.debian.org/~branden/