Problem with Google APT repo
Weird. I'm getting this:

  Err:1 http://dl.google.com/linux/talkplugin/deb stable/main amd64 google-talkplugin amd64 5.41.3.0-1
    Hash Sum mismatch

But as far as I can see, the file I get at that URL from my browser does in fact match the md5sum and sha1 in the package description. As far as I can tell this means either there's a bug in APT, or there's a bug in the web service and it serves different content depending on whether APT is downloading or I'm downloading the file manually with my browser.

  # sha1sum google-talkplugin_5.41.3.0-1_amd64.deb
  0bbc3d6997ba22ce712d93e5bc336c894b54fc81  google-talkplugin_5.41.3.0-1_amd64.deb
  # md5sum google-talkplugin_5.41.3.0-1_amd64.deb
  03ea81590baa680d286d28533c4d40e1  google-talkplugin_5.41.3.0-1_amd64.deb
  # apt-cache show google-talkplugin=5.41.3.0-1
  Package: google-talkplugin
  Version: 5.41.3.0-1
  Architecture: amd64
  Maintainer: Voice and Video Chat Linux Team
  Installed-Size: 17703
  Depends: libasound2 (>= 1.0.23), libc6 (>= 2.14), libcairo2 (>= 1.2.4), libgcc1 (>= 1:4.1.1), libgdk-pixbuf2.0-0 (>= 2.22.0), libglib2.0-0 (>= 2.14.0), libgtk2.0-0 (>= 2.24.0), libpango1.0-0 (>= 1.14.0), libstdc++6 (>= 4.6), libx11-6, libxcomposite1 (>= 1:0.3-1), libxext6, libxfixes3, libxrandr2 (>= 2:1.2.99.2), libxrender1, libv4l-0
  Section: main/web
  Priority: optional
  Filename: pool/main/g/google-talkplugin/google-talkplugin_5.41.3.0-1_amd64.deb
  Size: 7800474
  SHA1: 0bbc3d6997ba22ce712d93e5bc336c894b54fc81
  MD5sum: 03ea81590baa680d286d28533c4d40e1
  Description: Google Talk Plugin
   The Google Talk Plugin is a browser plugin that enables you to use Google
   voice and video chat to chat face to face with family and friends.
   .
   This product includes software developed by the OpenSSL Project for use in
   the OpenSSL Toolkit (http://www.openssl.org/). This product includes
   cryptographic software written by Eric Young (e...@cryptsoft.com).
  Description-md5: 90dff11722a74dd57469b3d3a05ec44b

-- greg
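For what it's worth, the comparison done by hand amounts to something like the sketch below. The file names and contents are made up for illustration; the real test would compare the .deb that APT fetched (e.g. from /var/cache/apt/archives/partial/) against the browser download.

```shell
# Sketch: check whether two downloads of the "same" .deb are byte-identical
# by comparing checksums. Paths and contents here are simulated.
set -e
tmp=$(mktemp -d)
printf 'fake package contents' > "$tmp/apt-copy.deb"
printf 'fake package contents' > "$tmp/browser-copy.deb"
apt_sum=$(sha1sum "$tmp/apt-copy.deb" | awk '{print $1}')
browser_sum=$(sha1sum "$tmp/browser-copy.deb" | awk '{print $1}')
if [ "$apt_sum" = "$browser_sum" ]; then
    echo "checksums match"            # prints: checksums match
else
    echo "checksums differ: server is serving different content"
fi
rm -rf "$tmp"
```

If the two sums differ while the browser copy matches the SHA1 in the Packages entry, that points at the server (or an intermediate proxy) rather than at APT's verification.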
Re: A case study of a new user turned off debian
Graham Wilson [EMAIL PROTECTED] writes:

> > Regardless. Having people install fresh machines with things like
> > Postgres 7.2 is just embarrassing.
>
> I am not embarrassed.

Well, perhaps you should be. Whenever they ask for support, those users will be told the version they're running is hopelessly out of date and that all the trouble they're having is because of their choice of version. (Actually the postgres list does an admirable job of attempting to provide support for 7.2 and even 7.1, but inevitably the answer turns out to be that the problem was fixed in 7.3. Or now, 7.4.)

Those users will also be struggling with major production issues like being unable to run 24x7 because of required periodic maintenance (vacuum and reindex) that requires downtime.

Basically, given that 7.3 has been out for an entire release cycle (7.4 will be released within days), giving 7.2 to new users is simply ridiculous. The same holds for having new users install 2.2 kernels or XFree86 4.1. Sure, there are cases where these things are passable or even useful, but telling a new user by default that these awful buggy releases are what he or she should be installing on a fresh machine is, well, as I said, embarrassing.

Personally I'm of the opinion that stable is useless. It certainly has no use for me. Perhaps if I ran a production server on Debian I might think otherwise, but I rather doubt it. When I had production servers they all ran 2.4 and needed the latest stable releases of anything important, like database, mail, and web server services. If I ran production servers on Debian today I would probably pick an arbitrary date off snapshot.debian.org and declare that my stable. If I had security problems I would pick a date recent enough to have the security fixes, test it, and declare that stable. It wouldn't be guaranteed to be bug-free, but then nothing is.
Stable has tons of minor bugs that no upstream maintainer would listen to because they were fixed aeons ago anyways, or more likely are no longer relevant in current sources. -- greg
Re: A case study of a new user turned off debian
Darren Salt [EMAIL PROTECTED] writes:

> I demand that Greg Stark may or may not have written...

What does that mean?

-- greg
Re: A case study of a new user turned off debian
Darren Salt [EMAIL PROTECTED] writes:

> > If apt kept even a single old revision in its cache then rolling back
> > could be as simple as
> >   apt-get install -t previous libc6
>
> That would be good. (Similarly for aptitude, of course.)
>
> One question occurs, however: should this also (try to) roll back
> packages on which $PREVIOUS depends?

Well, that would be the main advantage. The user can do the above fairly easily if he can find the .deb, but he has to track down all the dependencies that APT broke trying to install the broken package. It should roll back to the previous revision anything that depends on a newer version of libc6 than the one being installed, and do so recursively.

-- greg
Re: A case study of a new user turned off debian
Darren Salt [EMAIL PROTECTED] writes:

> BTW, no need to Cc: me - or did Gnus not notice the Mail-Followup-To
> header?

Uhm. What Mail-Followup-To header? I didn't receive one on this message; perhaps it's stripped by the mail server? Or perhaps you're mistaken about it being included? I've attached the original message including headers.

---BeginMessage---

I demand that Greg Stark may or may not have written...

> Darren Salt [EMAIL PROTECTED] writes:
> > I demand that Greg Stark may or may not have written...
>
> What does that mean?

It's (more or less) from The Hitch-Hiker's Guide to the Galaxy. The bit in question concerns two philosophers who are trying to stop the computer Deep Thought from being programmed to find the Answer to the Question of Life, The Universe and Everything. If you want to know more, then I suggest that you read the first of the books, watch the TV series or listen to the original radio series...

BTW, no need to Cc: me - or did Gnus not notice the Mail-Followup-To header?

--
| Darren Salt   | linux (or ds) at | nr. Ashington,
| woody, sarge, | youmustbejoking  | Northumberland
| RISC OS       | demon co uk      | Toon Army
| Let's keep the pound sterling

Barney of Borg: I assimilate you; you assimilate me...

-- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]

---End Message---

-- greg
Re: A case study of a new user turned off debian
Josip Rodin [EMAIL PROTECTED] writes:

> It would be helpful if people wouldn't make sweeping generalizations all
> the time.

All the time? ...

-- greg
Re: A case study of a new user turned off debian
Josip Rodin [EMAIL PROTECTED] writes:

> On Tue, Nov 04, 2003 at 01:53:36AM -0600, Chris Cheney wrote:
> > It would be helpful if Debian could even be installed on machines newer
> > than about 2 years old.
>
> It would be helpful if people wouldn't make sweeping generalizations all
> the time. Only a part of the new machines are made with hardware that
> needs particularly special drivers, there's plenty of it that works fine.

Sure, as long as I didn't need the network interface or access to all my hard drives, 2.2 would have been fine. Hm...

Regardless. Having people install fresh machines with things like Postgres 7.2 is just embarrassing.

-- greg
Re: A case study of a new user turned off debian
Chris Cheney [EMAIL PROTECTED] writes:

> Er, no, they are not all in the pool. The only packages in the pool are
> the current versions for stable/testing/unstable/experimental. There are
> also the few packages that haven't been completely compiled on all archs
> yet and so are still left in the archive while this is being done.
>
> snapshot.debian.net has nearly every deb since 2002/06/04, but it is not
> an official debian repo afaik.

Hm, it appears to be true that not every single revision is there. But there are certainly more than just the unstable and testing revisions too:

  libc6_2.2.5-11.2_i386.deb    26-Sep-2002 11:32  3.2M
  libc6_2.2.5-11.5_i386.deb    08-Jun-2003 01:32  3.2M
  libc6_2.3.2-9_i386.deb       26-Oct-2003 21:47  3.6M
  libc6_2.3.2.ds1-8_i386.deb   30-Oct-2003 12:17  4.6M
  libc6_2.3.2.ds1-9_i386.deb   02-Nov-2003 11:18  4.6M

As it turned out, 2.3.2-9 was a perfectly reasonable revision to roll back to.

-- greg
A case study of a new user turned off debian
I finally convinced a sysadmin friend of mine that Debian was the way and the light. He started a new job and showed up on his first day to set up his machine by installing Debian. In short, things went horribly wrong and he started this new job by wasting two days picking up the pieces. He's now very leery of suggesting Debian for other machines at work or of using it himself at home.

What started the chain of events was that a fairly routine minor bug bit the latest libc6 release. He's an experienced sysadmin, though, and wasn't the least bit fazed by that. What drove him batty was that it was so hard to recover from the mess, and all the obvious avenues just made the problem worse.

All he had to do was install an older version of libc6 and every other package would have been happy. All the infrastructure is there to do this: the old packages are all on the ftp/http sites, and the package may even be sitting in apt's cache. But there's no interface for it. The only interface for rolling back is switching the entire machine to an earlier distribution and telling apt to try to downgrade -- which is unlikely to work. And worse, every time you run apt it only downloads and unpacks *more* packages, all of which, of course, fail as well.

What would be really neat would be if aptitude, or perhaps even apt, checked for earlier versions of the package in the pool and offered them as options if the current one fails to configure.

-- greg
Re: A case study of a new user turned off debian
Julian Mehnle [EMAIL PROTECTED] writes:

> First, I think what Daniel Jacobowitz said is entirely true. Why didn't
> you start with testing?

Sure, testing is less likely to trigger this. But testing isn't infallible either, and it doesn't mean Debian shouldn't have better error handling. The easier it is for people to manage a Debian system when things go wrong, the better, regardless of how much the chances of going wrong are minimized.

> > All he had to do was install an older version of libc6 and every other
> > package would have been happy. All the infrastructure is there to do
> > this, the old packages are all on the ftp/http sites, the package may
> > even be sitting in apt's cache. But there's no interface for it.
>
> Wrong. If, on an unstable system, Apt sources for testing are also listed
> in /etc/apt/sources.list, you can always do a `apt-get -t testing install
> libc6` or `apt-get install libc6/testing`.

Unfortunately not. apt won't downgrade an already installed package like that. In any case that doesn't really help. What do you do if testing is 2 months old, as is often the case with things like mozilla? Or if installing the testing version is exactly what caused the problem? All I want to do is give up on this new version and go to an earlier version, most likely the version I had installed five minutes ago. Downgrading to testing would probably require a whole new set of libraries and more work.

> Or, you could create a file /etc/apt/preferences and pin the testing
> version of the package with a high enough priority. See `man
> apt_preferences`. Then do a `apt-get dist-upgrade`.

That's about the last place I would send a new user. I read that man page about three times during this crisis before I decided it would be hopeless to try to explain this procedure online. This is what I meant about there not being an interface.
If apt said "Hm, version 1.2 of libc failed to configure, would you like to install the previous version (1.1) from testing and hold back the following packages that depend on the new one (awk, grep, sed) [Yn]?" -- that would be an interface. If it didn't prompt at all, but `apt-get -f -t testing libc` did that automatically without explaining what it was doing, that would be an interface too. Telling people "go edit this random file to set pin priorities for things to arbitrary numbers, find out which package dependencies fail, add those to your list of pin priorities, etc." is not a useful interface for this case.

In any case, having the granularity of stable, testing, unstable really doesn't help. All the package versions are in the pool. I want to be able to tell apt to try such and such a version, or at least to put back the version I had before and restore whatever other packages it must to satisfy dependencies.

> > The only interface for rolling back is switching the entire machine to
> > an earlier distribution and telling apt to try to downgrade -- which is
> > unlikely to work. And worse, every time you run apt it only downloads
> > and unpacks *more* packages, all of which, of course, fail as well.
>
> This is probably one of the worst ways of rolling back a few or even a
> single package.

Well, it's the only way a new admin knows about. He was told to put stable, testing, or unstable in a spot, and that didn't work. So he can try putting a different word in that spot. He can't magically pull apt_preferences out of thin air and decide to go editing a file he's never heard of. And even if he did, it wouldn't really do what he wanted.

I didn't say it was a good idea or that it was going to work. My whole point is that that approach sucks and we should make something more effective rather than leave the admin stuck.

-- greg
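For the record, the pinning recipe that man page describes boils down to a fragment like this in /etc/apt/preferences (the package and version are illustrative; the relevant detail is that a Pin-Priority above 1000 is what permits apt to downgrade at all):

```
Package: libc6
Pin: version 2.3.2-9
Pin-Priority: 1001
```

followed by an `apt-get dist-upgrade` -- which rather proves the point about this not being an interface a new admin could be expected to discover.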
Re: A case study of a new user turned off debian
Brian May [EMAIL PROTECTED] writes:

> On Mon, Nov 03, 2003 at 03:05:56PM -0500, Greg Stark wrote:
> > What started the chain of events was that a fairly routine minor bug bit
> > the latest libc6 release. He's an experienced sysadmin though and wasn't
> > the least
>
> What (probably; I am guessing a bit) continued the chain of events: ...

All wonderful guesses, but not really relevant to what Debian could do better to handle the situation. All I'm trying to do is look at what I did as an experienced Debian user and figure out a) why a new Debian user's instincts were all wrong, b) why the existing tools made the problem worse, and c) why the tools can't just do what I did, or at least make it easier to reach the right approach.

The main difference between apt's error handling and my own was that I was aware that I could simply roll back to a version other than the current unstable or testing. There are many versions in between, and rolling back to testing was overkill that would have caused tons more problems than it solved. In the case of someone tracking testing there isn't even any such option (rolling back to stable being laughable).

So all it would take to make the tools handle this would be to somehow make apt aware of more revisions of packages. They're all in the pool, after all. But short of making some kind of humongous mega-Packages file with every revision of every package -- which apt wouldn't scale up to anyway -- they're currently unavailable to APT.

The low-hanging fruit here would be to have APT keep packages you had installed yourself in the cache rather than immediately discarding them as soon as they're upgraded. At a minimum, keeping one extra revision would at least let you roll back. Something more flexible, keeping old revisions for n days after being replaced, would be even cooler.
Currently recovering from a package failure means manually downloading a single .deb and using dpkg to install it, then tracking down the right versions of the dependencies for that .deb, and trying to install those, and ... basically reverting to RedHat-style manual dependency resolution. If apt kept even a single old revision in its cache, then rolling back could be as simple as

  apt-get install -t previous libc6

or, perhaps a little less automatically,

  apt-cache show libc6

to list the available revisions, then explicitly

  apt-get install libc6:2.3.2-8

Actually this wouldn't really have helped my friend at all, because he was unlucky enough that the *first* version of libc6 from unstable he saw happened to be the buggy one. That doesn't really happen that often to libc6, so he had particularly bad luck there.

-- greg
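The cache-based rollback proposed above could be sketched in a few lines of shell. The cache directory and .deb names here are simulated; a real implementation would look in /var/cache/apt/archives and would do version comparison with dpkg's own rules (dpkg --compare-versions) rather than plain `sort -V`:

```shell
# Sketch: find the previous cached revision of a package to roll back to.
set -e
cache=$(mktemp -d)                       # stand-in for /var/cache/apt/archives
touch "$cache/libc6_2.3.2-8_i386.deb" \
      "$cache/libc6_2.3.2-9_i386.deb" \
      "$cache/grep_2.5-1_i386.deb"
pkg=libc6
# Sort this package's cached revisions by version; take the next-to-last,
# i.e. the revision just before the current (broken) one.
previous=$(ls "$cache/${pkg}_"*.deb | sort -V | tail -n 2 | head -n 1)
echo "would roll back to: $(basename "$previous")"
# prints: would roll back to: libc6_2.3.2-8_i386.deb
rm -rf "$cache"
```

With the .deb in hand, the rollback itself would just be `dpkg -i` on it; the hard part apt would still have to solve is doing the same recursively for dependents.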
Re: How come X seems insistent on managing my XF86Config now?
Branden Robinson [EMAIL PROTECTED] writes:

> [CCing debian-x because if I get hit by a bus or arrested by Secretary
> Ashcroft today, the following will be important for the inheritor of our
> XFree86 packages to know.]

Ah, for some reason I had trouble finding this mailing list. I poked around /usr/share/doc/xserver-xfree86 and the xsf web pages and couldn't find any mention of a separate mailing list. I imagine it's there and I'm just blind.

> On Thu, Oct 09, 2003 at 12:20:01AM -0400, Greg Stark wrote:
> > Branden Robinson [EMAIL PROTECTED] writes:
> > > Just answer the questions.
> >
> > Well there seem to be a lot of them. And a lot of them don't seem to
> > have default answers. Or in some cases any reasonable answer given my
> > setup.
>
> Actually, they all have default answers. A few have blank default answers
> under most circumstances (like PCI bus ID and XKB variant), which it is
> safe to leave blank.

Uh, in that case something's broken. I held down the return key and it skipped a bunch of questions but then got stuck on one; I had to choose an answer. Then I held down the return key again and it got stuck a few questions later. At least 3-4 questions needed manual answers.

In any case I was more worried that it would touch my config file. With your assurances I went ahead and answered the questions, and there weren't nearly as many of them as I feared.

> In the meantime, I suggest just hitting enter until the questions go away
> (if you're using the dialog frontend -- if not, do the equivalent for
> your frontend).

This was in an xterm.

-- greg
Re: How come X seems insistent on managing my XF86Config now?
Branden Robinson [EMAIL PROTECTED] writes:

> Just answer the questions.

Well there seem to be a lot of them. And a lot of them don't seem to have default answers. Or, in some cases, any reasonable answer given my setup.

> It doesn't insist on managing your XF86Config-4 file now, it just insists
> on asking you questions, because I need to (greatly) improve the
> PRIORITY_CEILING logic which I failed to implement correctly.

Ick. That's, uhm, really really annoying, but I guess you know that.

> Please see URL: http://people.debian.org/~branden/xsf/FAQ (near the end)
> if you'd like to know what's going on.

I did check there. But it seemed to say there were a million reasons why it shouldn't be asking me all these questions, and the only advice it gave was how to convince it to take control back if it stopped. I assumed that if it wasn't managing my config file it wouldn't ask me the questions.

-- greg
Re: versions of -dev packages
Nikita V. Youshchenko [EMAIL PROTECTED] writes:

> Here is a script that finds different versions of installed binary
> packages with the same source package name. Running this script on some
> of my systems gave me lots of interesting information ...

Indeed, it seems to be finding me lots of packages like these:

  bash-2.05b# dpkg -L xserver-svga
  Package `xserver-svga' does not contain any files (!)
  bash-2.05b# dpkg -L guile1.3
  Package `guile1.3' does not contain any files (!)
  bash-2.05b# dpkg -L libperl5.6
  Package `libperl5.6' does not contain any files (!)

I thought when a package's last file was replaced the package was marked purged. These are all marked "rc", i.e., indicating that config files are present.

-- greg
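For anyone hunting these down, listing everything in that "rc" (removed, config files remain) state is just a matter of filtering the first column of `dpkg -l`. A sketch, with the dpkg output simulated so the pipeline can be shown end to end; on a real system you'd pipe `dpkg -l` straight into the awk:

```shell
# Sketch: list packages that are removed but still have config files.
# The dpkg -l output below is simulated sample data.
dpkg_output='ii  bash        2.05b-3   The GNU Bourne Again SHell
rc  guile1.3    1.3-1     Removed, config files remain
rc  libperl5.6  5.6.1-7   Removed, config files remain'

echo "$dpkg_output" | awk '$1 == "rc" { print $2 }'
# prints:
# guile1.3
# libperl5.6
```

On a real system, `dpkg --purge` on those names would clear the leftover config files and finally mark the packages purged.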
Re: Status of fab and of his pkgs [Was: ITA: man-db, groff]
Joey Hess [EMAIL PROTECTED] writes:

> [ Moving to -devel. ]
>
> Fabrizio Polacco wrote:
> > It's written everywhere: DON'T RUN MAN AS ROOT!

Having no idea where it moved from or what the context was, I'll blithely wade in with an opinion: just because the bug is documented doesn't mean it's not a bug. There's really no excuse for programs that don't work as root. Debian's man package is, at the very least, guilty of violating the principle of least surprise. When I type "man mount", I expect to see a man page. Sometimes you _have_no_choice_ but to run it as root, and so it should always work in those cases.

-- greg
Re: Obsolete software in /usr/local
Ben Armstrong [EMAIL PROTECTED] writes:

> On Sat, Jan 06, 2001 at 03:49:07PM -0500, Greg Stark wrote:
> > I've been meaning to bring this up for a while: Why on earth was this
> > change ever made?
>
> I can't speak for whoever made the change, but I suspect that it is
> because LD_LIBRARY_PATH can be used to support libraries in
> /usr/local/lib for programs in /usr/local/bin without messing up anything
> that ships with Debian.

There's no difference between having /usr/local/lib in LD_LIBRARY_PATH and just adding it at the end of /etc/ld.so.conf, except that with LD_LIBRARY_PATH:

 1) it won't be cached, so it'll be slightly slower;
 2) setuid binaries will break;
 3) each user has to add it manually.

This violates the more general principle in Debian policy that no environment variables should have to be set for things to work right. Of course there's no specific package that doesn't work right, since this only affects locally installed software. So it doesn't violate the specific meaning of that requirement, but it definitely breaks with the model.

-- greg
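What the local admin has to do today amounts to appending the entry by hand and refreshing the cache. A sketch, done against a temporary file rather than the real /etc/ld.so.conf (after editing the real file you would rerun ldconfig as root so the cache is rebuilt):

```shell
# Sketch: append /usr/local/lib to a copy of ld.so.conf, idempotently.
# Uses a temp file; the real target would be /etc/ld.so.conf + ldconfig.
set -e
conf=$(mktemp)
printf '/usr/X11R6/lib\n' > "$conf"      # example pre-existing entry
grep -qx '/usr/local/lib' "$conf" || echo '/usr/local/lib' >> "$conf"
tail -n 1 "$conf"
# prints: /usr/local/lib
rm -f "$conf"
```

The grep guard means the snippet can be run repeatedly without duplicating the entry, which matters if it ends up in a postinst or site setup script.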
How to report bugs (was glibc thing)
Eray Ozkural (exa) [EMAIL PROTECTED] writes:

> The preprocessor macro seems to be undefined. There are also other
> subtleties while using pthread lib, such as the __USE_UNIX_98 stuff,
> which I really don't know (I only use c++, and UNIX_98 sections don't
> seem to come along. Why is that?) ...
>
> Now, I think what I say is fairly obvious but the package maintainer
> dismisses this bug like this with a changelog entry:

No. While there were maybe more polite ways to ask for more information, I'm sure it gets tiresome doing so repeatedly on a large package like glibc. The fact is this is *not* a useful bug report. To be useful, a bug report must answer the following questions:

 1) What did you do to cause the bug?
 2) What did you observe?
 3) What did you expect to observe?
 4) Why do you think what you observed was a bug?

In this case, saying "it seems to be undefined" leaves a million and one questions unanswered. Do you mean you couldn't find it in the include files? Or did your program fail because it wasn't defined? Which include files were you including? What other defines did you already have? ...

A useful bug report would have been of the form:

  I compiled the following C or C++ file with the following options: ...
  I got the following error indicating this macro wasn't defined: ...
  I expected the macro to be defined and this file should compile because: ...

Just writing in your conclusions is useless 90% of the time. Your conclusions may be right, but the maintainer doesn't have ESP and can't necessarily deduce where they came from and what the bug is.

-- greg
Re: Obsolete software in /usr/local
Ben Armstrong [EMAIL PROTECTED] writes:

> On Fri, Jan 05, 2001 at 01:31:42PM -0400, Ben Armstrong wrote:
> > Changes in version 1.9.2: Removed /usr/local/lib from the default
> > /etc/ld.so.conf for Debian (Bug#8181).
>
> oops, except that mod is *ancient*. way before potato. dunno why this
> would change between potato and woody.

I've been meaning to bring this up for a while: why on earth was this change ever made? /usr/local/lib is the supported place for the local admin to put libraries available to all programs. Every admin who wants to use /usr/local as intended will have to manually add this entry to have a working system.

In general, Debian packages ship with empty /usr/local directories but are configured to automatically use files put in those directories by the administrator. Why should /usr/local/lib be any different? Incidentally, by the same logic /usr/local/bin should be in the standard path and /usr/local/sbin should be in root's standard path as well.

-- greg
Debugging X libraries
There are no packages that have shared X11 libraries built with debugging symbols, are there? I wouldn't want to ask such a thing of the maintainer -- there are enough complexities in the X packages as it is -- but nobody would happen to have such a libX11.so for libc5, would they?

-- greg
Re: GTK problems - not compiling
So it's not a bug, and we're satisfied with the following situation?

 - Some programs from other linux systems or even hamm systems will
   randomly seg fault.
 - If any libraries from other linux distributions or even hamm systems
   are present on a potato machine when programs are compiled, the
   resulting binaries may randomly crash.
 - There's no supported way to compile programs on a potato system that
   will run on a hamm system or any other glibc2.0 distribution.
 - This may impact our responsiveness handling security issues in hamm.

Really, I think the glibc maintainers made a fundamental error in overestimating the sophistication of linux's shared library versioning scheme. A little work should have gone into ld.so before trying this experiment. In linux, two libraries with the same soname are required to be fully compatible. Period. Otherwise it isn't possible to exchange binaries or libraries between two machines. One-direction compatibility is simply not adequate to justify keeping the same soname.

greg
Re: Intent to package: GREED
Leon Breedt [EMAIL PROTECTED] writes:

> Regarding curl, I'll be packaging an SSL enabled version only, as it
> seems that policy doesn't cover a source package building for both US and
> non-US.

There are a couple of packages which do this, mutt-i etc. I think they all make some minor alteration like touching a file, then rebuild and upload to non-US including a complete duplicate of the source package.

I struggled for a while to build non-US versions of fetchmail and zephyr built against kerberos, but couldn't do it to my liking. Too many tools assumed they could run over debian/control and pick out package names themselves, and too many assumed that every package listed in debian/control should be built.

Really, someone should come up with a single library of debian/* parsing routines, preferably compiled into a library which could be linked into perl, a bash builtin module, dpkg-deb, and whatever else. Then everyone would use the same parsing routines to access these files, and if a new feature were needed it could be added in one place. Well, that's just a thought; there are disadvantages to that approach too. But I found it very frustrating at the time.

Incidentally, just because the binary can't be exported doesn't mean the source can't be. fetchmail and zephyr, for example, include only hooks for authentication -- absolutely no hooks directly to encryption. (I think.) So there's no reason the source shouldn't be exportable. The resulting binary would have hooks in the form of dynamic linking to the encryption routines, though...

greg
Re: xfstt 0.9.99 uploaded - some news with it
[EMAIL PROTECTED] (Marco d'Itri) writes:

> On Apr 28, Stephen J. Carpenter [EMAIL PROTECTED] wrote:
> > The MAIN process runs as root. This is because if it receives a kill
> > signal it needs to clean up its pid file. Can't do that if it was not
> > root (not without the permissions on /var/run changing)
>
> This is NOT an excuse for running as root. Make it create the pidfile in
> /var/run/xfstt like other similar programs do.

Couldn't we just set the sticky bit on /var/run? I guess this doesn't solve the problem for non-debian systems, hm.
Re: netscape crashes on potato
So, for the people who don't see crashes: which version of Netscape are you using? Do you use java successfully in Netscape? Do you have plugger installed? Do you have any other plugins installed? Which version of libc are you using?

  # dpkg -l \*netscape\* | grep ^hi
  hi  netscape-base-4   5      Popular World-Wide-Web browser software (bas
  hi  netscape-base-4   4.5-1  Popular World-Wide-Web browser software (bas
  hi  netscape-java-4   4.5-1  Popular World-Wide-Web browser software (jav

Unfortunately all the later versions have an erroneous dependency on glibc2.1 (I'm pretty certain Netscape hasn't released any glibc 2.1 dependent binaries!) Actually, would it be possible to bug the netscape maintainer into releasing libc5 packages? That might solve a lot of problems, since the libc5 version isn't actually beta...

Remco Blaakmeer [EMAIL PROTECTED] writes:

> On Fri, 30 Apr 1999, Dan Nguyen wrote:
> > ... Is this just me?? or is this a glibc2.1 issue?
>
> No, I have these problems with glibc2.0.

I would *love* to find out why some people don't have problems, and fix this.

> > My netscape just loves to go insane. I try to visit a page, and it
> > begins to take up 100% cpu time, and doesn't stop. Generally after a
> > minute of waiting to see that it's gone I end up killing it. I've had
> > this problem with 4.5 and 4.51

I've seen this problem too. 100% cpu, and 100% memory too. I've never seen a 128Mb desktop thrash its swap so badly before. Linux really doesn't behave any better in this case than I remember 1.2.13 doing, sigh.

> I've had this, too, with both navigator 4.0x and navigator 4.5x. Try
> disabling java support and see if that helps. It helped me a lot.
> Somehow, either the java vm in glibc2 netscape is very buggy or there are
> a whole lot of buggy java applets out there, or both.

I have java disabled already, because any java reliably causes Netscape to seg fault immediately. As soon as I hit a page with an applet in it: boom, gone.
If I had to guess, I would guess the memory thrashing was some loop running off the end of a buffer and walking through all of its heap, possibly related to the javascript stuff. Hm, what kind of gc does javascript use? If it's something like the Boehm conservative gc... Well, this is all conjecture.

greg
Re: Intent to package KerberosV
Bear Giles [EMAIL PROTECTED] writes:

> My plan, back when I was exploring the idea of a US-only package and/or
> derived distribution, was to use shared libraries and create a special
> null Kerberos package which would return error codes, something very
> close to the Kerberos 'bones' package (which is not export restricted).
> The resulting package should be exportable, and Kerberos functionality
> would be enabled whenever someone installed the Kerberos packages.

This wouldn't hold water, unfortunately. US crypto export law includes prohibiting software with hooks specifically for crypto. So generic hooks for arbitrary filters are OK, and hooks just for authentication (such as in the fetchmail source) are OK, but binary packages built against such a null library are certain to include calls to crypto routines, which is verboten.

> The second is that both Kerberos and SSLeay use libcrypto. Maintainers
> could change the library name expected, but it's a pain.

Uh-oh, is this a problem for our existing kerberos 4 packages (which everyone seems to have forgotten about, hmph)? I haven't gotten any bug reports about conflicts with libcrypto, and it's definitely included.

> So far, I haven't considered adding the Kerberos compile options, because
> of doubt about this and also because no-one has ever asked for it.

I tried to build non-US versions of zephyr and fetchmail with kerberos support a while back and found our tools just couldn't handle a source package that could produce different binary packages depending on the whim of the user. (This would have been especially neat since libzephyr contains all the kerberos calls; I could have produced a single libzephyr-i that switched the behaviour of all the zephyr clients.)

Alas, my current solution is to just make it really easy for other people to build their own kerberized packages. To build a kerberized set of zephyr packages you just do

  debian/rules WITHOUT_KRB4= binary

and for fetchmail I think you can just rebuild with kerberos4kth-dev installed and it does the right thing.
This satisfied my immediate needs -- I can get my mail and read MIT zephyrs -- but it doesn't really help the kerberos cause. I do want to get the kerberos pam module packaged, but I don't know anything about it myself.

greg
Re: Install-time byte-compiling: Why bother?
[EMAIL PROTECTED] writes:

> Obviously I've misunderstood the behaviour of Emacs here - I'd assumed
> that the internal form was the same regardless of whether one got there
> via byte-compiling or not. Apparently this isn't the case!

It certainly isn't. I have to question your results too; the times I had to work with Gnus or W3 with only a single file non-byte-compiled (to debug it), I found they were *unusably* slow. Also, you should try them on a machine that's a little memory starved. You'll find any substantial package will take huge amounts of memory if you run it without byte-compiling it.

That said, I would agree with leaving any small to moderately sized packages non-byte-compiled. I think that's our current policy? Most packages don't really gain much from byte-compiling -- just large packages like Calc, W3, Gnus, etc. (In fact I suspect the main determining factor is whether the package makes heavy use of the cl package...)

greg
Where are the boot floppies?
They don't seem to be in the Incoming mirrors; are they somewhere else? There's someone who posts periodically saying he has an automatically built iso image and boot floppies somewhere, but I can't find any of his posts in the archives.

Thanks, greg
Re: bashisms
Adrian Bridgett [EMAIL PROTECTED] writes:

> To this end we really need a short document which details the
> differences, the right way to do things and the definite No-Nos. Maybe
> this should go into the packaging manual, but initially it is probably
> better to have it separate as it would change quite a bit at the start.
> I'd be happy to write such a document, but I need help as I only really
> know two things:
>
>   cp fred.{txt,html} dest    ->  cp fred.txt fred.html dest
>   function f() { echo Hi; } ->  f() { echo Hi; }

Also, never use >& or &> as a shortcut to redirect both stderr and stdout. It's an evil syntax borrowed from csh, and it's why dpkg-ftp currently spews lots of extra output but otherwise works perfectly. And don't use nonstandard flags on builtins, including `read -r', or any flag on `type'.

greg

-- TO UNSUBSCRIBE FROM THIS MAILING LIST: e-mail the word unsubscribe to [EMAIL PROTECTED] . Trouble? e-mail to [EMAIL PROTECTED] .
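The portable forms of the constructs discussed above can be shown in a few lines; this is just an illustration, runnable under any POSIX shell:

```shell
# Portable equivalents of common bashisms.
# 1. Brace expansion is a bashism; spell the arguments out:
#      bash-only:  cp fred.{txt,html} dest/
#      portable:   cp fred.txt fred.html dest/
# 2. The `function` keyword is a bashism; a plain name() works everywhere.
f() { echo Hi; }
# 3. `&>file` (csh-style) is not portable; redirect each stream explicitly.
f > /tmp/out.$$ 2>&1
cat /tmp/out.$$
# prints: Hi
rm -f /tmp/out.$$
```

Running a script with /bin/sh pointed at a stricter shell (or with a syntax checker like `sh -n`) is a cheap way to catch these before they ship.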
Re: be careful with Replaces, please
Yann Dirson [EMAIL PROTECTED] writes:

> Greg Stark writes:
> > We've got to be a little more careful with the Replaces header. I just
> > installed the libc6 version of comerr, and dpkg helpfully deinstalled
> > e2fsprogs.
>
> That's perfectly normal if you previously had e2fsprogs = 1.10-6, which
> does contain libcom_err! You should probably install e2fsprogsg to
> replace e2fsprogs.

I know I should install a new e2fsprogs, obviously. I was just suggesting we should find some way to avoid the default action being to deinstall packages that aren't really being completely replaced. I'm not sure what better to do, though.

Incidentally, the "g" suffix on packages indicates they're libc6 packages. Usually it's only needed on libraries, since you might want both libc6-based and libc5-based libraries installed at the same time; usually we don't care about having two versions of binary packages installed simultaneously. I'm not exactly sure why we have both e2fsprogs and e2fsprogsg, possibly because e2fsprogs is so essential that we're being paranoid about future libc6 bugs.

greg