Re: extensive patching
On Fri, May 16, 2008 at 11:26:01AM -0700, Don Armstrong wrote:
> On Fri, 16 May 2008, Martin Uecker wrote:
> > Requiring distro specific changes feels wrong anyway. Software
> > should be coupled by standardized interfaces. But I might be naive
> > here. What are the distro specific changes we are talking about?
>
> It'd be great[0] if we never had to do distribution specific
> changes.[1] However, considering the amount of software which is not
> LSB compliant, FHS compliant, policy compliant, ships internal
> libraries, has upstreams who don't understand APIs and ABIs, has slow
> release cycles, has insane upstreams, or otherwise includes bugs which
> need to be fixed, that'll only rarely be the case for some very simple
> packages.

[...]

> 1: One could argue that if you can't come up with a relatively large
> list of distribution specific changes that need to be made yourself,
> you've not done the research to make useful suggestions for radically
> altering how Debian actually does development. Knowing the problem
> comes before knowing the answer.

Because LSB, FHS, APIs and ABIs, slow release cycles, insane upstreams
and bugs are Debian specific issues? I don't think so.
Re: extensive patching
On Fri, 16 May 2008, Martin Uecker wrote:
> Requiring distro specific changes feels wrong anyway. Software
> should be coupled by standardized interfaces. But I might be naive
> here. What are the distro specific changes we are talking about?

It'd be great[0] if we never had to do distribution specific
changes.[1] However, considering the amount of software which is not
LSB compliant, FHS compliant, policy compliant, ships internal
libraries, has upstreams who don't understand APIs and ABIs, has slow
release cycles, has insane upstreams, or otherwise includes bugs which
need to be fixed, that'll only rarely be the case for some very simple
packages.

Even so, most developers and maintainers actively work to reduce the
size of the diff.gz that they ship by sending patches upstream, if for
no other reason than doing so means that they don't have to deal with
merging Debian specific patches back in later.

Those who are concerned about what happened in the ssl case are
welcome and encouraged to assist maintainers in examining the patches
made to software, liaising with upstream about useful patches, and
discussing questionable packages. [Take Luciano as an example: he
actually found a mistake while those of us discussing this thread were
still closing the barn door.]

At the end of the day, we're here to make the most technically
excellent distribution we can make. That means making changes, and
sometimes we make mistakes. Finding and fixing those mistakes, and
spreading the changes to everyone, is what we should be doing.

Don Armstrong

0: We could just ship a universal diff.gz that installed a very simple
debian/rules file that called dh, and we could spend the rest of our
time making macros, drinking arrak, and playing tetrinet!

1: One could argue that if you can't come up with a relatively large
list of distribution specific changes that need to be made yourself,
you've not done the research to make useful suggestions for radically
altering how Debian actually does development. Knowing the problem
comes before knowing the answer.

-- 
No amount of force can control a free man, a man whose mind is free
[...] You can't conquer a free man; the most you can do is kill him.
 -- Robert Heinlein _Revolt in 2100_ p54

http://www.donarmstrong.com              http://rzlab.ucr.edu

-- 
To UNSUBSCRIBE, email to [EMAIL PROTECTED]
with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
Re: extensive patching
Hi

On Fri, 16 May 2008 20:59:44 +0200, Martin Uecker <[EMAIL PROTECTED]>
wrote:
> I don't see a patch there. This might sound like nitpicking, but
> a real patch would have provided some context to the two lines.

Yes, there is no context, but it is a patch, and it is clear that it
wants to remove two lines of code. Today we all know that only one of
them was relevant to that problem...

> Nevertheless, the right thing in my opinion would have been to
> propose a patch, wait until it is accepted, and then to package
> the new upstream version.

Well, the problem is that you don't always get a response; you simply
have to include some patches before they are reviewed/accepted by
upstream. If you're lucky, upstream is responsive and you can push all
patches directly to them, but from my experience[*] that often does
not work.

[*]: I haven't patched my Debian packages much (well, I'm upstream for
many of them), but I was working for SUSE about 3 years ago, and many
of the patches which were really needed for packaging are still
waiting for even a review in upstream trackers. You simply cannot
build a distribution without unreviewed patches.

-- 
Michal Čihař | http://cihar.com | http://blog.cihar.com
Re: extensive patching
On 11387 March 1977, Martin Uecker wrote:
> Nevertheless, the right thing in my opinion would have been to
> propose a patch, wait until it is accepted, and then to package
> the new upstream version.

If you want that - build your own distribution. Or well - an LFS.
Because that's *not* what a distribution is about. If we stop fixing
stuff ourselves, we can just stop building our own distribution.

-- 
bye, Joerg
AM: What's the best way to find out if your debian/copyright is correct?
NM: Upload the package into the NEW queue.
Re: extensive patching
Hi Martin,

I'm afraid this will be my last remark in this thread (do I hear
cheers from the crowd?) since I really should go do something more
productive now :-)  Thanks for keeping the tone of discourse civil --
clearly this is a subject you feel strongly about, and the problem
that started the thread frazzled all our nerves.

Martin Uecker wrote:
> Barry deFreese wrote:
>> Which brings up at least two issues. Upstream not wanting the patches
>> or dead upstream. Speaking from the games team alone I would bet that
>> 50% or more of the packages have no upstream anymore. Should those
>> packages be removed?
>
> If upstream is dead or unable to do his job well, somebody should fork
> the project (or take ownership). But this has nothing to do with
> packaging software and should in my opinion not be intermixed.

[snip]

> Fork it. But not as part of the packaging work.

It's easy to say "somebody should fork it." But not enough people have
time or resources to guarantee a new upstream for every dead project
(or project with a bad upstream) worth packaging.

For an example: after I was no longer able to serve as a good upstream
for wmakerconf (since I stopped using Window Maker), it was six months
before someone else volunteered to take it over [1]. He made a single
upstream release back in April 2007 and then also had to abandon it.
Only today did someone (me) even get around to uploading that new
release into Debian. And wmakerconf is not a very obscure package --
it has 797 installations in popcon [2], an installation rate of better
than 1% among systems reporting to popcon.

[1] http://bugs.debian.org/290350
[2] http://qa.debian.org/popcon.php?package=wmakerconf

Maintainership in a caretaker mode, building up a large set of patches
to keep things working, is often the best that can be expected. But
this is a lot better than nothing! The larger the project, the more
this is so. Time is unfortunately a scarce commodity in the community.
Responding to some of your more recent email:

> "Kevin B. McCarty" <[EMAIL PROTECTED]> wrote:
>> At least for the example of my packages that I brought up, if I could
>> not make an extensive set of patches, it is unlikely that the software
>> could have met the policy and quality standards to be accepted into
>> Debian. Whether it's better for Debian to ship heavily-patched software
>> (that is still quite popular in the physics community) from a dead
>> upstream, or not to ship it at all (forcing users to download it on
>> their own from upstream's web site, then find and apply some set of
>> patches grabbed from elsewhere on the web [2,3], then go through a
>> baroque and obsolete build procedure [4]) is of course open for debate.
>> You can guess that I hold the former of these opinions.
>
> Surely, this is very valuable work and I am not implying at all that
> you should stop it. But if upstream is dead, then there is no reason
> not to step in and simply take ownership of the package.
> Traditionally, if upstream was dead, somebody formally declared
> ownership of the software and took over development. I think this is
> the right thing to do, because then there is a new upstream where all
> other work can be shared.

I believe that declaring oneself the new upstream of a piece of
software means taking on a *much* greater responsibility than being
the Debian maintainer of a package of that software. In the case of
Cernlib and PAW, they are venerable (i.e. obsolete) FORTRAN-based code
that, with a big effort, can be forced to work and be policy-compliant
on modern Linux systems with gfortran. Among physicists, who are
mostly amazingly conservative with respect to software, they still
have a following. As Debian maintainer, I only have to care about, and
fix bugs for, people using the software on the modern Linux/gfortran
systems that I'm familiar with.
As an upstream maintainer, I would have to care about:

- people wanting support for obsolete platforms like HP-UX or SunOS
  (there are still lots of those old workstations around!)
- people wanting support for platforms I don't want to care about,
  like Cygwin or Mac OS X
- people wanting support for proprietary FORTRAN compilers
- ensuring that the code works on new platforms (the build system is
  based on Imake; it is a nightmare!)
- future-proofing by porting to autotools, and rewriting lots of code
  that assumes sizeof(void *) == sizeof(int) and only works on 64-bit
  platforms by use of ugly hacks

And then there is not just the software itself, but also all the
project infrastructure which would be expected if a new upstream took
over: web pages, online documentation, an upstream bug tracking
system, mailing lists ...

I do not have anything close to the resources that would be needed to
do all that for a project the size of Cernlib; it would take a
good-sized team of people.

> If upstream is incompetent, then somebody can step in and fork
> the software.

Again, with a clear
Re: extensive patching
Hi Michal!

> Martin Uecker <[EMAIL PROTECTED]> wrote:
> > Upstream answered that it is okay to remove the seeding of the PRNG
> > with uninitialized memory, but the concrete patch which additionally
> > and erroneously removed all seeding was never posted on openssl-dev.
>
> Are you sure?
> http://thread.gmane.org/gmane.comp.encryption.openssl.devel/10917

I don't see a patch there. This might sound like nitpicking, but
a real patch would have provided some context to the two lines.

Nevertheless, the right thing in my opinion would have been to
propose a patch, wait until it is accepted, and then to package
the new upstream version.

Regards,
Martin
Re: extensive patching
Hi

On Fri, 16 May 2008 19:28:52 +0200, Martin Uecker <[EMAIL PROTECTED]>
wrote:
> Upstream answered that it is okay to remove the seeding of the PRNG
> with uninitialized memory, but the concrete patch which additionally
> and erroneously removed all seeding was never posted on openssl-dev.

Are you sure?
http://thread.gmane.org/gmane.comp.encryption.openssl.devel/10917

-- 
Michal Čihař | http://cihar.com | http://blog.cihar.com
Re: extensive patching
Barry deFreese wrote:

[...]

> > Buggy patches happen all the time. The question is, how could
> > something as bad as this slip through? And one important reason is
> > IMHO that splitting up the development/bug fixing/review by
> > creating different software branches is bad.
>
> Different software branches in what respect? Just by nature of
> having a distro "package"?

By having a large diff against the upstream source with changes
unrelated to packaging.

[...]

> > Clearly, Debian adds value by its patches. If those patches were
> > integrated upstream, then the whole free software community would
> > benefit.
>
> Which brings up at least two issues. Upstream not wanting the patches
> or dead upstream. Speaking from the games team alone I would bet that
> 50% or more of the packages have no upstream anymore. Should those
> packages be removed?

If upstream is dead or unable to do his job well, somebody should fork
the project (or take ownership). But this has nothing to do with
packaging software and should in my opinion not be intermixed.

> Also, obviously, there are changes that make no sense to
> upstream that are strictly distro specific.

Requiring distro specific changes feels wrong anyway. Software
should be coupled by standardized interfaces. But I might be naive
here. What are the distro specific changes we are talking about?

> Also, I don't think we should always wait for upstream's
> new releases for adding them if we have them available.
> It might depend on every case.

I think there should be a policy. I propose:

> > I would prefer if only security fixes and bugs which might cause
> > data loss were fixed directly in Debian. Everything else should
> > go upstream first.
>
> Sounds good but again, what about unresponsive/dead upstreams. Do you
> leave your users to "suffer"? Is Debian here to service the user
> community or not?

Fork it. But not as part of the packaging work.
> > > Maybe there's a problem with the fact that some of those patches
> > > are reviewed by just one person, but then again, I seriously
> > > think that it would have been quite difficult to discover that
> > > there was a problem with this one. The proof that it wasn't
> > > evident is not only that upstream didn't see the problem either,
> > > nor any other developer or derivative distribution or independent
> > > reviewers in 2 years.
> >
> > Did you look at the code? This was not exactly a deeply hidden flaw
> > in some obscure looking code. Upstream didn't see the patch. That's
> > exactly the problem. And I doubt that there was any review of this
> > code in all these 2 years.
>
> I have seen links where "upstream" was asked about/notified of the
> patch, so this isn't an entirely true statement. Egos play a big part
> in all of this as well.

Upstream answered that it is okay to remove the seeding of the PRNG
with uninitialized memory, but the concrete patch which additionally
and erroneously removed all seeding was never posted on openssl-dev.

Regards,
Martin
Re: extensive patching
Martin Uecker wrote:
> [EMAIL PROTECTED] wrote:
>> I disagree. The cause of the disaster was not that Debian does its
>> own patching, but the fact that that patch was buggy.
>
> Buggy patches happen all the time. The question is, how could
> something as bad as this slip through? And one important reason is
> IMHO that splitting up the development/bug fixing/review by creating
> different software branches is bad.

Different software branches in what respect? Just by nature of having
a distro "package"?

>> On the whole I think that Debian benefits a lot from custom patches,
>> and in fact many packages would be severely buggy and/or wouldn't
>> integrate properly with the rest of the system without them. It's
>> not a secret that many projects benefit from Debian patches, so
>> there might be something good about them.
>
> Clearly, Debian adds value by its patches. If those patches were
> integrated upstream, then the whole free software community would
> benefit.

Which brings up at least two issues. Upstream not wanting the patches
or dead upstream. Speaking from the games team alone I would bet that
50% or more of the packages have no upstream anymore. Should those
packages be removed?

Also, obviously, there are changes that make no sense to upstream that
are strictly distro specific.

Also, I don't think we should always wait for upstream's new releases
for adding them if we have them available. It might depend on every
case.

> I would prefer if only security fixes and bugs which might cause
> data loss were fixed directly in Debian. Everything else should go
> upstream first.

Sounds good but again, what about unresponsive/dead upstreams. Do you
leave your users to "suffer"? Is Debian here to service the user
community or not?

>> Maybe there's a problem with the fact that some of those patches are
>> reviewed by just one person, but then again, I seriously think that
>> it would have been quite difficult to discover that there was a
>> problem with this one. The proof that it wasn't evident is not only
>> that upstream didn't see the problem either, nor any other developer
>> or derivative distribution or independent reviewers in 2 years.
>
> Did you look at the code? This was not exactly a deeply hidden flaw
> in some obscure looking code. Upstream didn't see the patch. That's
> exactly the problem. And I doubt that there was any review of this
> code in all these 2 years.

I have seen links where "upstream" was asked about/notified of the
patch, so this isn't an entirely true statement. Egos play a big part
in all of this as well.

>> Of course, the development and checking of the patches should be
>> done as cooperatively with upstream as possible, as upstream might
>> see something we're not seeing, but the way to the solution, in my
>> opinion, is not to avoid patching but to develop a way to check them
>> as extensively as possible.
>
> Checking something extensively is much easier if there is one
> canonical branch which everybody agrees on.

Sounds like Utopia but I can't see it happening.

> Regards,
> Martin

Barry deFreese