Re: distributed module configuration
Sam Ravnborg wrote:
> On Wed, Feb 13, 2008 at 12:54:33AM -0800, David Miller wrote:
>> From: Sam Ravnborg <[EMAIL PROTECTED]>
>> Date: Wed, 13 Feb 2008 09:45:41 +0100
>>
>>> So we could do:
>>>
>>>     config foo
>>>             tristate "do you want foo?"
>>>             depends on USB && BAR
>>>             module
>>>                     obj-$(CONFIG_FOO) += foo.o
>>>                     foo-y := file1.o file2.o
>>>             help
>>>               foo will allow you to explode your PC ...
>>>
>>> Does this fit what you had in mind?
>>
>> Yes it does.
>>
>> Now I'll ask if you think embedding this information in one of the
>> C files for a module would be even nicer?
>
> I have no good idea for the syntax, and I am not sure what is gained by
> reducing a driver by one file.

Agreed - simple drivers would then be a single file - and that's a good
argument.

I like Sam's proposal, but maybe we can simplify the rules of the
"module" section: some of the information is often redundant, and
dependencies are sometimes computed by the config part and sometimes by
the Makefile (and sometimes the Makefile contains wrong hacks). I would
really like a simple section such as:

    module foo : file1.o file2.o

and leave the complex rules in the normal Makefile (which is also good,
because the complex rules are often not specific to a single driver).

But I don't like merging all the information into a single file:
- it is not so clean for drivers built from multiple source files;
- it would make "copy and paste" from other drivers harder: most
  developers are not comfortable with Kconfig and Makefiles, so easy
  grepping of other Kconfig/Makefile files could help developers avoid
  strange/wrong hacks.

ciao
cate
--
To unsubscribe from this list: send the line "unsubscribe linux-kernel"
in the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
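For comparison, a minimal sketch of how the same hypothetical driver is
described today, split across two files - the repetition of the
CONFIG_FOO symbol between the config entry and the Makefile rule is the
redundancy the proposal folds together (file paths are made up):

```make
# drivers/foo/Kconfig (today) - Kconfig syntax, shown as comments:
#   config FOO
#           tristate "do you want foo?"
#           depends on USB && BAR

# drivers/foo/Makefile (today) - the same CONFIG_FOO symbol again:
obj-$(CONFIG_FOO) += foo.o
foo-y := file1.o file2.o
```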
Re: Linux 2.6.24
Rafael J. Wysocki wrote:
> On Friday, 25 of January 2008, [EMAIL PROTECTED] wrote:
>> On Fri, 25 Jan 2008 10:10:11 +0100, "Giacomo A. Catenazzi" said:
>>> - that you introduce a new step in git management: every changeset
>>>   is compile-tested before going out to the world. I think this can
>>>   be done automatically, and I think that one or two configurations
>>>   are enough to find most of the problems.
>>
>> It's true that a compile on x86 and a compile on PowerPC
>
> Please add IA-64 and ARM at the very least.

My point was about "obvious" errors, and I really think that one or two
configurations will find most of these, done in an automatic way and
without delaying the process. Anyway, more tests are surely better.
BTW, IIRC there are already a few "testing farms" that automatically
test a lot of environments and configurations (IIRC, also run-time
tests).

>> should flush out most of the truly stupid mistakes, but those are
>> usually found and fixed literally within hours. Anyhow, the proper
>> time for test compiles is *before* it goes into the git trees at all
>> - it should have been tested before it gets sent to a maintainer for
>> inclusion.

A few hours, but a lot of changesets will break bisecting (few docs
tell us how to continue bisecting on compile errors). But I agree with
you.

> That's correct, but I'm not sure how to enforce it.

As usual, "one more level of indirection" ;-). Along with a spam
filter, we (or Linus) need a patch filter.

ciao
cate

PS: I don't want to be pessimistic. I only want to raise the problem,
to see if it is possible to improve the testing environment without
affecting the development of Linux.
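For the "how to continue bisecting on compile errors" point: git can
route around an unbuildable commit with `git bisect skip`. A toy
demonstration on a throwaway repository (the repository, commits, and
names are made up for the demo; in a real kernel bisect you would run
`git bisect skip` whenever `make` fails):

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email tester@example.com
git config user.name tester
# five trivial commits to bisect over
for n in 1 2 3 4 5; do
    echo "$n" > file
    git add file
    git commit -qm "change $n"
done
git bisect start HEAD HEAD~4   # HEAD is bad, HEAD~4 is known good
# suppose the commit checked out here fails to compile:
git bisect skip                # mark it untestable; bisect tries a neighbour
git bisect log | tail -n 1     # the skip is recorded in the bisect log
```
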
Re: Linux 2.6.24
Linus Torvalds wrote:
> On Thu, 24 Jan 2008, Linus Torvalds wrote:
>> The release is out there (both git trees and as tarballs/patches),
>> and for the next week many kernel developers will be at (or flying
>> into/out of) LCA in Melbourne, so let's hope it's a good one.
>
> Since I already had two kernel developers asking about the merge
> window and whether people (including me) traveling will impact it, the
> plan right now is to keep the impact pretty minimal. So yes, it will
> probably extend the window from the regular two weeks, but *hopefully*
> not by more than a few days.

As a tester, I'm not so happy. The last few merge windows were a
nightmare for us testers. It reminds me of the 2.1.x times, but with a
few differences:
- there are more changes, so bugs go unnoticed/ignored in the first
  weeks, or
- people push as many patches as possible, so they delay bug fixes to
  later (after the merge window).

If it continues like this, I should stop testing the kernel during the
merge windows (but it seems that other testers have already given up on
the early merge phase).

As a tester I would like:
- slow merges, so that developers could rebase and test (compile-test)
  the interaction of the new code;
- that you introduce a new step in git management: every changeset is
  compile-tested before going out to the world. I think this can be
  done automatically, and I think that one or two configurations are
  enough to find most of the problems.

Happy LCA,
ciao
cate
Re: Why is the kfree() argument const?
[EMAIL PROTECTED] wrote:
> Giacomo Catenazzi wrote:
>> const: No writes through this lvalue. In the absence of this
>> qualifier, writes may occur through this lvalue.
>>
>> volatile: No caching through this lvalue: each operation in the
>> abstract semantics must be performed (that is, no caching assumptions
>> may be made, since the location is not guaranteed to contain any
>> previous value). In the absence of this qualifier, the contents of
>> the designated location may be assumed to be unchanged except for
>> possible aliasing.
>
> Well, I'm still wondering if there is not something dangerous or weird
> about declaring the argument of kfree() to be const...

It should be only a cosmetic thing (and a few warnings in some not yet
identified cases).

> Since the content of the referenced object is unlikely to be declared
> volatile, the compiler should be allowed to assume that its content
> was not changed, except for possible aliasing. But what happens if the
> compiler can also prove there is no aliasing? In that case, it should
> be allowed to assume that the content pointed to was not modified at
> all, right?

I don't follow that. Anyway, C has the "as-if" rule, which means the
compiler may optimize as long as the observable result doesn't change
(timing is not an issue). So a very smart compiler (which would compile
all units at the same time) could do good things without needing
explicit register, static (with some exceptions), const, volatile (if
it is very smart, it knows about the system and signals, and can treat
things as "volatile" on need), restrict, ...

> Fortunately, kmalloc is not declared with attribute malloc in the
> kernel, so there should be no problem, but if it were (and, actually,
> I've not found why it wasn't), the compiler would be able to tell that
> *s1 *cannot* be aliased, and therefore decide to move val = s1->i
> *after* having called kfree(). In that case, we would clearly have a
> bug...

IIRC kmalloc(0) returns an alias (but that is not so relevant in this
discussion).

Hmm. C is usually not optimized that aggressively. One thing to
remember is that a function could have a lot of side effects, so the
compiler will/should never optimize in the way you describe (unless the
compiler knows exactly how kfree works internally). C "const" is a lot
weaker than C++ "const". BTW, I don't like const in kfree either, but I
was talking about the weak "const" of C.

> So, although this should currently work, code which breaks if you do a
> legitimate modification somewhere else looks quite dangerous to me.

For sure! But C is adamant that "users know better than the compiler
what they want to do", so the compiler cannot do big optimizations in C
without breaking C rules, and users can do nasty things playing with
hidden pointers.

ciao
cate

PS: use the lkml rule: "do CC: to all relevant people!"
Re: Why is the kfree() argument const?
Jakob Oestergaard wrote:
> On Thu, Jan 17, 2008 at 01:25:39PM -0800, Linus Torvalds wrote:
>> ...
>> Why do you make that mistake, when it is PROVABLY NOT TRUE!
>>
>> Try this trivial program:
>>
>>      int main(int argc, char **argv)
>>      {
>>              int i;
>>              const int *c;
>>
>>              i = 5;
>>              c = &i;
>>              i = 10;
>>              return *c;
>>      }
>>
>> and realize that according to the C rules, if it returns anything but
>> 10, the compiler is *buggy*.
>
> That's not how this works (as we obviously agree).
>
> Please consider a rewrite of your example, demonstrating the
> usefulness and proper application of const pointers:
>
>      extern foo(const int *);
>
>      int main(int argc, char **argv)
>      {
>              int i;
>
>              i = 5;
>              foo(&i);
>              return i;
>      }
>
> Now, if the program returns anything else than 5, it means someone
> cast away const, which is generally considered a bad idea in most
> other software projects, for this very reason.
>
> *That* is the purpose of const pointers.

"restrict" exists for this reason. const is only about the lvalue. You
should draw a line, not make C more complex!

Changing the names of the variables in your example:

     extern print_int(const int *);

     int main(int argc, char **argv)
     {
             extern int errno;

             errno = 0;
             print_int(&errno);
             return errno;
     }

print_int() doesn't know that errno is also its argument, and this
compilation unit doesn't know that print_int() will modify errno.
Ok, I changed int to extern int, but you see the point? Do you want
complex rules about const, depending on context (extern, volatile, ...)?

ciao
cate
Re: Kernel Development & Objective-C
Diego Calleja wrote:
> On Tue, 4 Dec 2007 22:47:45 +0100, "J.A. Magallón"
> <[EMAIL PROTECTED]> wrote:
>> That is what I like about C++: with good placement of high-level
>> features like const's and & (references) one can gain fine control
>> over what gets copied or not.
>
> But... if there's some way Linux can get "language improvements", it
> is with new C standards/gcc extensions/etc. It'd be nice if people
> tried to add (useful) C extensions to gcc, instead of proposing some
> random language :)

But nobody knows what such extensions should be.

I think the core kernel will remain in C, because there are no problems
there and no improvement is possible (with another language).

But the driver side has more problems. There is a lot of copy-paste,
quality is often not high, not all developers know the Linux kernel
well, and drivers are not well maintained against new or better
internal APIs. So if we found a good template or a good language to
help *some* drivers without causing a lot of problems to the rest of
the community, it would be nice. I don't think it is written in stone
that kernel drivers should be written only in C, but currently there is
no good alternative. And I think it is a huge task to find a language,
prototype an API, and convert some test drivers. And there is no
guarantee of a good result.

ciao
cate
Re: [Bug 9246] On 2.6.24-rc1-gc9927c2b BUG: unable to handle kernel paging request at virtual address 3d15b925
Rafael J. Wysocki wrote:
> On Tuesday, 4 of December 2007, Giacomo A. Catenazzi wrote:
>> Ingo Molnar wrote:
>>> hi,
>>>
>>> * Giacomo Catenazzi <[EMAIL PROTECTED]> wrote:
>>>> On 2.6.24-rc1-gc9927c2b BUG: unable to handle kernel paging request
>>>> at virtual address 3d15b925
>>>>
>>>> In the latest git, I see the following BUGs in various programs. It
>>>> seems reproducible, but sometimes I get hard lockups on poweroff.
>>>
>>> do you still get this with more recent kernels? We had a number of
>>> fixes for memory corruptors since -rc1 - perhaps one of them took
>>> care of your problem as well.
>>
>> No, the problem was solved a few days after the report.
>
> Can you point me to the fix, please?

Unfortunately, no. Too much chaos in that period: I think I ran into
two or three different kernel bugs (and a Debian keyboard bug) in one
or two days. (Usually I find such important bugs only a few times per
year, and never together.) I also tried git-bisect, but too many runs,
too many non-compiling commits, a bad environment (the Debian bug) and
the quick fix of the kernel bug ;-) stopped me from searching further.

ciao
cate
Re: newlist: public malware discussion [Re: Out of tree module using LSM]
Jon Masters wrote:
> On Mon, 2007-12-03 at 23:45 +0100, Bodo Eggert wrote:
>> Jon Masters <[EMAIL PROTECTED]> wrote:
>>> On Thu, 2007-11-29 at 11:11 -0800, Ray Lee wrote:
>>>> On Nov 29, 2007 10:56 AM, Jon Masters <[EMAIL PROTECTED]> wrote:
>>>>> To lift Alan's example, a naive first implementation would be to
>>>>> create a suffix tree of all of ESR's works, then scan each page on
>>>>> fault to see if there are any partial matches in the tree.
>>>>
>>>> Ah, but I could write a sequence of pages that on their own looked
>>>> like garbage, but in reality, when executed, would print out a copy
>>>> of the Jargon File in all its glory. And if you still think you
>>>> could look for patterns, how about executable code that
>>>> self-modifies in random ways but when executed as a whole actually
>>>> has the functionality of fetchmail embedded within it? How would
>>>> you guard against that?
>>
>> You can't scan all possible code for malware: take a random piece of
>> code, possibly halting. Replace all halting conditions with a piece
>> of malware. Scan it. If it were possible to detect the malware
>> without false positives, you'd have solved the halting problem.
>
> Good. I think you got the point of my sarcasm.
>
> My *point* was that we have two different camps of people here:
>
> * Those who think some solution is better than none.

But we are talking about malicious programs, and there is a common
motto here: "poor security can be worse than no security", so in this
field "none" is often better than "some".

Really, I don't understand why you push such a module. Malicious
software will, within a few generations (a few years), switch to
alternate methods. So the Linux kernel will be worse (and maybe will
expose more bugs because of the added complexity), but no problem is
solved. See windoze: it is one patch after another, so the system is
complex, unmaintainable, and surely not more secure. Or do you want to
change our behavior to that of Windows users? They compress files
before sending them, because of antivirus policies.

If antiviruses added security, we would not have such big botnets and
worms on the competing OS. Antiviruses offer only a short-term cure.

ciao
cate

> * Those who want an unobtainable, perfect solution.
>
> I'm not criticising; each has their position. However, I was
> attempting to explain that I do fully "get it" by running through an
> example of how to work around more elementary on-access scanning
> schemes. I know that (no matter what marketing exists to the
> contrary), it is never possible to have perfect anti-malware
> software. But I do think there is a time and a place for Linux to
> help make some folks feel safer - on-access file scanning isn't evil,
> and you don't have to use it! Freedom! :-)
>
> Having spoken to a few people, I've created the following mailing
> list, so we can rant away and come up with a list of requirements to
> present for further discussion. Note that this is a case where I
> actually expect people to be *happy* with yet another email list :-)
>
> http://lists.printk.net/cgi-bin/mailman/listinfo/malware-list
>
> Please sign up, and encourage interested third parties to do so too.
> Let's work this all out. Then I'll come back sometime over the
> holidays with a summary and some followup.
>
>> If I had to design a virus scanner interface, I'd e.g. create a
>> library* providing an {open|mmap}_and_scan() function that would
>> give me a clean copy/really-private mapping of a scanned file, and a
>> scan_{blob,file}() function that would scan a block of memory/a
>> file.
>
> Although I'm open to the idea, I'm almost 100% convinced that nobody
> is going to buy modifying userspace applications one at a time. I
> think there is a legitimate feeling of this needing to be massaged by
> the kernel on some level. But I might be wrong - don't flame me.
>
> Jon.
Re: [Bug 9246] On 2.6.24-rc1-gc9927c2b BUG: unable to handle kernel paging request at virtual address 3d15b925
Ingo Molnar wrote:
> hi,
>
> * Giacomo Catenazzi <[EMAIL PROTECTED]> wrote:
>> On 2.6.24-rc1-gc9927c2b BUG: unable to handle kernel paging request
>> at virtual address 3d15b925
>>
>> In the latest git, I see the following BUGs in various programs. It
>> seems reproducible, but sometimes I get hard lockups on poweroff.
>
> do you still get this with more recent kernels? We had a number of
> fixes for memory corruptors since -rc1 - perhaps one of them took care
> of your problem as well.

No, the problem was solved a few days after the report.

Thanks,
cate
Re: [BUG] New Kernel Bugs
Mark Lord wrote: Ingo Molnar wrote: .. This is all QA-101 that _cannot be argued against on a rational basis_, it's just that these sorts of things have been largely ignored for years, in favor of the all-too-easy "open source means many eyeballs and that is our QA" answer, which is a _good_ answer but by far not the most intelligent answer! Today "many eyeballs" is simply not good enough and nature (and other OS projects) will route us around if we don't change. .. QA-101 and "many eyeballs" are not at all in opposition. The latter is how we find out about bugs on uncommon hardware, and the former is what we need to track them and overall quality. A HUGE problem I have with current "efforts" is that once someone reports a bug, the onus seems to be 99% on the *reporter* to find the exact line of code or commit. Ghad, what a repressive method. As a long-time kernel tester, I see some problems with the newer "new development model". In the short merge window, after too long a wait, too many patches land at once, so it is hard to bisect bugs and to get the attention of developers. My impression is that in that one week there are many more messages on lkml and too many bugs to be handled in so few days. I have two proposals: - better patch quality: I would like every commit to compile, so an automatic per-commit build test and public blame could raise the quality of initial commits [bisecting across non-compilable points is not a trivial task]; - slower patch inclusion during the merge window (i.e. not too many big changes in the first days). As a tester I would prefer some big changes to be included in a "secondary window" (a -pre or -rc release), in a different period from the big patch rush. ciao cate
Re: LSM conversion to static interface
Jan Engelhardt wrote: On Oct 23 2007 07:44, Giacomo Catenazzi wrote: I do have a pseudo-LSM called "multiadm" at http://freshmeat.net/p/multiadm/ , quoting: Policy is dead simple since it is based on UIDs. The UID ranges can be set at module load time or during runtime (sysfs params). This LSM basically grants extra rights, unlike most other LSMs[1], which is why modprobe makes much more sense here. (It also does not have to do any security labelling that would require it to be loaded at boot time already.) But this is against the LSM design (and the first agreements about LSM): an LSM can deny rights, but it should not grant extra permissions or bypass standard Unix permissions. It is just not feasible to add ACLs to all the million files in /home, also because ACLs are limited to around 25 entries. And it is obvious I do not want prof to have UID 0, because then you cannot distinguish who created what file. So the requirement for the task is to have unique UIDs. The next logical step would be to give capabilities to those UIDs. *Is that wrong*? Who says that only UID 0 is allowed to have all 31 capability bits turned on, and that all non-UID-0 users need to have all 31 capability bits turned off? So we give caps to the subadmins (which is IMHO a natural task), and then, as per LSM design (wonder where that is written), deny some of the rights that the capabilities raised for the subadmins grant, because that is obviously too much. Nothing wrong. I only said that it was against (IIRC) the original principle of LSM in the kernel (we should only remove capabilities). I have nothing against changing the design or the rules. It was only a comment, to make sure we know what we are doing ;-) ciao cate
Re: LSM conversion to static interface
Crispin Cowan wrote: Giacomo Catenazzi wrote: What technical and regulatory differences are there between a "driver/LSM module" that is built in and one that is modular? It seems silly to me to look for a difference. A kernel with a new kernel module is a new kernel. *I* understand that, from a security and logic-integrity point of view, there is not much difference between a rebuilt-from-source kernel and a standard distro kernel with a new module loaded. However, there is a big difference for other people, depending on their circumstances. * Some people live in organizations where the stock kernel is required, even if you are allowed to load modules. That may not make sense to you, but that doesn't change the rule. [read also the very last comment: don't take my arguments too seriously] OK, but why simplify life for companies with such a silly rule? Are they not the same people that required a commercial UNIX kernel? So don't worry about internal company rules; in one year a lot of things change. Anyway, it is a good reason to delay the conversion, if there really are so many external LSM modules used in production environments. (But see the next point.) * Some people are not comfortable building kernels from source. It doesn't matter how easy *you* think it is, it is a significant barrier to entry for a lot of people. Especially if their day job is systems or security administration, and not kernel hacking. Configuring a new kernel is not "kernel hacking", and IIRC it is covered in the very first level of LPI. Anyway, where will you find the new module? It has to be very specific to the actual kernel installed, so I see little difference between distributing a module and distributing a kernel. Distributions have/had a lot of kernels (versions, SMP, processor-specific, vserver, xen, redhat, clusters, ...), so why not distribute a new kernel?
> Think of it like device drivers: Linux would be an enterprise > failure if you had to re-compile the kernel from source every > time you added a new kind of device and device driver. This is a frequent argument, but I don't believe it ;-) I see this argument more often than I see new devices in an enterprise. The real argument is: : Think of it like device drivers: Linux would be an enterprise : failure if you had to *compile* the kernel from source for : *every machine*. Which is a good reason to have modules. Is it still a good reason to have LSM modules? And to obey Sarbanes-Oxley? Don't take me wrong, the above comments are not entirely serious; my point was not about modules, but about why "Sarbanes-Oxley" should tell us that a new module is simpler than a new kernel. I like kernels without modules, so I want to understand all the reasons why people need modules (and this thread showed me other, non-classical ones). I know that modules are necessary in most situations, but I like to see whether some of those needs can be solved in other ways, so as to also simplify life for the "built-in" people. ciao cate
Re: [kbuild-devel] kbuild mailing list has moved
[added the owner of the old list] Sam Ravnborg wrote: The vger postmasters have created linux-kbuild on my request. The old list at sourceforge had a few issues: - it was subscriber-only - it relied on moderation For me it is OK. It was subscriber-only because it was a very low-traffic mailing list (but with daily spam). BTW, IIRC the spam problem began only when we moved to sf.net. Anyway, from the beginning there were problems with moderation (I think because the list has too little traffic, too high a spam/genuine-message ratio, and no more active moderators). Considering the nearly nonexistent spam on vger, I think it is a good move. ciao cate
Re: kfree(0) - ok?
Jan Engelhardt wrote: On Aug 14 2007 16:21, Jason Uhlenkott wrote: On Tue, Aug 14, 2007 at 15:55:48 -0700, Arjan van de Ven wrote: NULL is not 0 though. It is. Its representation isn't guaranteed to be all-bits-zero, C guarantees that. Hmm. It depends on your interpretation of "representation". In memory a null pointer can have some bit set. No, see a very recent discussion on the Austin Group list (which also lists a few machines that do not have an all-zero-bits null pointer). To clarify, from the Rationale of C99, section 6.7.8 "Initialization": : An implementation might conceivably have codes for floating zero : and/or null pointer other than all bits zero. In such a case, : the implementation must fill out an incomplete initializer with : the various appropriate representations of zero; it may not just : fill the area with zero bytes. As far as the committee knows, : all machines treat all bits zero as a representation of : floating-point zero. But, all bits zero might not be the : canonical representation of zero. Anyway, I think for the kernel it is safe to assume an all-zero-bits null pointer. ciao cate
Re: Contributor Agreement/Copyright Assignment
Bhutani Meeta-W19091 wrote: Motorola would like to understand if kernel.org has a contributor agreement (or a copyright assignment agreement) that is posted somewhere? We are investigating what would be needed from a legal standpoint to possibly contribute in the future. By "kernel.org" do you mean the Linux kernel? In that case: no copyright assignment is required. The changes should be compatible with GPL v2 (as you will find e.g. in http://lxr.linux.no/source/COPYING ). You should confirm the "Developer's Certificate of Origin 1.1"; this is in Documentation/SubmittingPatches ( http://lxr.linux.no/source/Documentation/SubmittingPatches ). Note: IANAL, and I don't speak for the other contributors ;-) ciao cate PS: there is already a file copyrighted by Motorola: include/net/sctp/constants.h ( http://lxr.linux.no/source/include/net/sctp/constants.h )
Re: [PATCH] "volatile considered harmful" document
Jonathan Corbet wrote: +The volatile storage class was originally meant for memory-mapped I/O +registers. Within the kernel, register accesses, too, should be protected I don't think this deserves to be added to the documentation, but just for reference: in userspace "volatile" is needed for signals (POSIX mandates that some variables be volatile, as part of the API, not as functionality). I don't know if this was also true of the original signal handling. Anyway, userspace APIs are not the kernel's problem ;-) ciao cate
Re: GPL only modules
Linus Torvalds wrote: > > On Mon, 18 Dec 2006, Alexandre Oliva wrote: >>> In other words, in the GPL, "Program" does NOT mean "binary". Never has. >> Agreed. So what? How does this relate with the point above? >> >> The binary is a Program, as much as the sources are a Program. Both >> forms are subject to copyright law and to the license, in spite of >> http://www.fsfla.org/?q=en/node/128#1 > > Here's how it relates: > - if a program is not a "derived work" of the C library, then it's not >"the program" as defined by the GPLv2 AT ALL. > > In other words, it doesn't matter ONE WHIT whether you use "ld --static" > or "ld" or "mkisofs" - if the program isn't (by copyright law) derived > from glibc, then EVEN IF glibc was under the GPLv2, it would IN NO WAY > AFFECT THE RESULTING BINARY. I really don't agree. It seems you confuse the source and the binary application. The source is surely not derived; you can link *any* libc to your program. But a binary is different. Let me start with your example about books: you write a book and you hold the copyright on the text, but if you publish it with publisher X, he may use his own font. You can read the book, and scan it to extract the text (I hope fair use allows that), but you cannot copy the book pages: they contain your text, but also the copyrighted font. The publisher should check that the two licenses are compatible, just like the user who links with a new library. For a binary it is the same. You can extract the libraries and the rest of the program (more easily with the sources), but as long as it is one binary, it is a new mixed entity. It is not only linking, it is mixing bytes! Part of the library is linked statically, and there are references to it in the static part of the program. It is a mix, and as long as the two parts are mixed (not merely linked) you must follow both licenses when copying! Choose any dynamically linked program on your machine and try to link it against a libc other than glibc (one not directly derived from it)... you will see how hard it is, and how different it is from an "aggregation".
And dynamic linking is only the last step of "merging" the two binaries. Other libraries tend to be more "dynamic", but glibc mixes in too much. In other words: with source A and library B, the binary C is derived from both A and B, but surely A is not derived from B. So, IMHO (IANAL), in these arguments we should not confuse the sources and the binary, and so not call either one simply "the program". ciao cate > > And I'm simply claiming that a binary doesn't become "derived from" by any > action of linking. > > Even if you link using "ld", even if it's static, the binary is not > "derived from". It's an aggregate. > > "Derivation" has nothing to do with "linking". Either it's derived or it > is not, and "linking" simply doesn't matter. It doesn't matter whether > it's static or dynamic. That's a detail that simply doesn't have anything > at all to do with "derivative work". > > THAT is my point. > > Static vs dynamic matters for whether it's an AGGREGATE work. Clearly, > static linking aggregates the library with the other program in the same > binary. There's no question about that. And that _does_ have meaning from > a copyright law angle, since if you don't have permission to ship > aggregate works under the license, then you can't ship said binary. It's > just a non-issue in the specific case of the GPLv2. > > In the presence of dynamic linking the binary isn't even an aggregate > work. > > THAT is the difference between static and dynamic. A simple command line > flag to the linker shouldn't really reasonably be considered to change > "derivation" status. > > Either something is derived, or it's not. If it's derived, "ld", > "mkisofs", "putting them close together" or "shipping them on totally > separate CD's" doesn't matter. It's still derived.
> Linus
Re: GPL only modules
Linus Torvalds wrote: > > On Mon, 18 Dec 2006, Alexandre Oliva wrote: >>> In other words, in the GPL, "Program" does NOT mean "binary". Never has. >> Agreed. So what? How does this relate with the point above? >> >> The binary is a Program, as much as the sources are a Program. Both >> forms are subject to copyright law and to the license, in spite of >> http://www.fsfla.org/?q=en/node/128#1 > > Here's how it relates: > - if a program is not a "derived work" of the C library, then it's not >"the program" as defined by the GPLv2 AT ALL. > > In other words, it doesn't matter ONE WHIT whether you use "ld --static" > or "ld" or "mkisofs" - if the program isn't (by copyright law) derived > from glibc, then EVEN IF glibc was under the GPLv2, it would IN NO WAY > AFFECT THE RESULTING BINARY. I really don't agree. It seems you confuse source and binary application. The source surelly is not derived, you can link *any* libc to your program. But a binary is different. Let start with your example about books: you write a book, you have the copyright of the text, but if you publish it with X publiher, he may use a own font. You can read the book, scan it to extract text (I hope fair use allows it), but not copy the book pages: there is your text, but also copyrighted font. Publisher should check that the two license are compatible, as the user that links with a new library. For binary, it is the same. You can extract libraries and rest of programs (better doing with sources), but until it is one binary, it is a new mixed entity. It is not only linking, it is mixing bytes! Some part of library is linked statically, there are some references in the static part of program. It is a mix and until the two part are mixed (not only linked) you should follow both licenses for copying! Choose any dynamic program in your machine, try to link glibc with an other (not directly derived libc) library... you see how it is hard, and it is very different to an "aggregation". 
And dynamic links is only the latest step of "merging" the two binaries. Other libraries tend to be more "dynamic", but glibc mixes to much In other word, source A, library B: the binary C is derived both from A and B, but surelly A is not derived by B. So IMHO IANAL, in arguments we should not confuse the sources and the binary in the arguments, so not calling simply "the program". ciao cate > > And I'm simply claiming that a binary doesn't become "derived from" by any > action of linking. > > Even if you link using "ld", even if it's static, the binary is not > "derived from". It's an aggregate. > > "Derivation" has nothing to do with "linking". Either it's derived or it > is not, and "linking" simply doesn't matter. It doesn't matter whether > it's static or dynamic. That's a detail that simply doesn't have anythign > at all to do with "derivative work". > > THAT is my point. > > Static vs dynamic matters for whether it's an AGGREGATE work. Clearly, > static linking aggregates the library with the other program in the same > binary. There's no question about that. And that _does_ have meaning from > a copyright law angle, since if you don't have permission to ship > aggregate works under the license, then you can't ship said binary. It's > just a non-issue in the specific case of the GPLv2. > > In the presense of dynamic linking the binary isn't even an aggregate > work. > > THAT is the difference between static and dynamic. A simple command line > flag to the linker shouldn't really reasonably be considered to change > "derivation" status. > > Either something is derived, or it's not. If it's derived, "ld", > "mkisofs", "putting them close together" or "shipping them on totally > separate CD's" doesn't matter. It's still derived. 
> > Linus - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: GPL only modules
Re: Postgrey experiment at VGER
Al Boldi wrote: Trond Myklebust wrote: On Wed, 2006-12-13 at 11:25 +0200, Dumitru Ciobarcianu wrote: On Wed, 2006-12-13 at 01:50 +0200, Matti Aarnio wrote: I do already see spammers smart enough to retry addresses from the zombie machine, but that share is now below 10% of all emails. My prediction for the next 200 days is that most spammers get the clue, but it gives us perhaps 3 months of less leaked junk. Great! IMHO this is only a step in an "arms race". What will you do in three months, remove this check because it will prove useless since the spammers will also retry? If yes, why install it in the first place? Why ever do anything? You're going to die eventually anyway... Right! The problem here is that it may do more harm than good. May I suggest a smarter way to filter these spammers, by just whitelisting email addresses of valid posters, after sending a confirmation for the first post. Now if these spammers get smart, and start using personal email addresses, I would certainly expect some real action by abused email address owners. So a challenge to the kernel hackers: build a mail filtering/proxy system, à la BSD. I don't remember the exact specification and features, but IIRC netfilter is not enough to do the greylisting (but pf was). Does anyone have hints on what the kernel can do in the fight against spam? ciao cate
oops in 2.6.11: "XFree86[2780] exited with preempt_count 1"
An oops in the latest kernel. It happens on early shutdown. I had two other oopses in the last week (with the latest bk tree), but those were hard crashes (still in X) and left no logs. Because of these three crashes (across several restarts), I think the bug is probably reproducible. BTW, there is an extra space before "<6>note" Mar 7 08:54:20 catee kernel: [ cut here ] Mar 7 08:54:20 catee kernel: PREEMPT SMP Mar 7 08:54:20 catee kernel: Modules linked in: iptable_mangle ipt_TOS ehci_hcd Mar 7 08:54:20 catee kernel: CPU:2 Mar 7 08:54:20 catee kernel: EIP:0060:[page_remove_rmap+42/64]Not tainted VLI Mar 7 08:54:20 catee kernel: EFLAGS: 00013286 (2.6.11-rc5) Mar 7 08:54:20 catee kernel: EIP is at page_remove_rmap+0x2a/0x40 Mar 7 08:54:20 catee kernel: eax: ebx: c1f20ee0 ecx: edx: c1fb9860 Mar 7 08:54:20 catee kernel: esi: f72b3a7c edi: fffd7000 ebp: c1fb9860 esp: f71d0e98 Mar 7 08:54:20 catee kernel: ds: 007b es: 007b ss: 0068 Mar 7 08:54:20 catee kernel: Process XFree86 (pid: 2780, threadinfo=f71d task=f7239530) Mar 7 08:54:20 catee kernel: Stack: c01462bc f7239530 f773c5c0 fffd6000 fffd7000 f773c5c0 0869f298 f7636804 Mar 7 08:54:20 catee kernel:f773c580 f75ed084 f773c580 0869f298 f773c5c0 c01471f7 f72b3a7c f75ed084 Mar 7 08:54:20 catee kernel:7dcc3065 0001 f7636804 f773c580 f773c5ac f7636804 f7239530 c0114fe4 Mar 7 08:54:20 catee kernel: Call Trace: Mar 7 08:54:20 catee kernel: [do_wp_page+540/752] do_wp_page+0x21c/0x2f0 Mar 7 08:54:20 catee kernel: [handle_mm_fault+311/336] handle_mm_fault+0x137/0x150 Mar 7 08:54:20 catee kernel: [do_page_fault+388/1394] do_page_fault+0x184/0x572 Mar 7 08:54:20 catee kernel: [do_signal+187/288] do_signal+0xbb/0x120 Mar 7 08:54:20 catee kernel: [__wake_up+56/80] __wake_up+0x38/0x50 Mar 7 08:54:20 catee kernel: [unix_release_sock+352/576] unix_release_sock+0x160/0x240 Mar 7 08:54:20 catee kernel: [invalidate_inode_buffers+21/144] invalidate_inode_buffers+0x15/0x90 Mar 7 08:54:20 catee kernel: [clear_inode+18/208] clear_inode+0x12/0xd0 Mar 7 08:54:20
catee kernel: [destroy_inode+53/64] destroy_inode+0x35/0x40 Mar 7 08:54:20 catee kernel: [dput+30/416] dput+0x1e/0x1a0 Mar 7 08:54:20 catee kernel: [__fput+185/288] __fput+0xb9/0x120 Mar 7 08:54:20 catee kernel: [filp_close+79/128] filp_close+0x4f/0x80 Mar 7 08:54:20 catee kernel: [do_page_fault+0/1394] do_page_fault+0x0/0x572 Mar 7 08:54:20 catee kernel: [error_code+43/48] error_code+0x2b/0x30 Mar 7 08:54:20 catee kernel: Code: 00 89 c2 8b 00 f6 c4 08 75 2c f0 83 42 08 ff 0f 98 c0 84 c0 74 1f 8b 42 08 40 78 0f ba ff ff ff ff b8 10 00 00 00 e9 66 0a ff ff <0f> 0b e2 01 a1 b4 37 c0 eb e7 c3 0f 0b df 01 a1 b4 37 c0 eb ca Mar 7 08:54:20 catee kernel: <6>note: XFree86[2780] exited with preempt_count 1 ciao cate - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: kernel BUG at mm/rmap.c:483!
Arjan van de Ven wrote: but what's the penalty of preventing microcode from loading? a performance hit? not even that; in theory a few cpu bugs may have been fixed. Nobody really knows since there's no changelog for the microcode.. You can see the processor bugs on Intel's website, e.g.: ftp://download.intel.com/design/Xeon/specupdt/24967847.pdf The following sentence (IMHO) means that a bug is corrected in microcode: "Workaround: It is possible for the BIOS to contain a workaround for this erratum." ciao cate
Requirement of make oldconfig [was: Re: [kbuild-devel] Re: CML2 1.3.1, aka ...]
"Eric S. Raymond" wrote: > > Peter Samuelson <[EMAIL PROTECTED]>: > > [esr] > > > Besides, right now the configurator has a simple invariant. It will > > > only accept consistent configurations > > > > So you are saying that the old 'vi .config; make oldconfig' trick is > > officially unsupported? That's too bad, it was quite handy. > > Depends on how you define `unsupported'. Make oldconfig will tell you > exactly and unambiguously what was wrong with the configuration. I think > if you're hard-core enough to vi your config, you're hard-core enough to > interpret and act on > > This configuration violates the following constraints: > (X86 and SMP==y) implies RTC!=n > > without needing some wussy GUI holding your hand :-). I think a fundamental requirement is that 'make oldconfig' should accept any configuration (including wrong ones). (If you change your rules, our old .config can become invalid on a new kernel, and we don't want to regularly edit our .config by hand.) My proposal: instead of complaining about a configuration violation, list the possible correct configurations and prompt the user to select one. In the case you cite, e.g., oldconfig should prompt: 1) SMP=n 2) RTC=m 3) RTC=y (assuming the ARCH is invariant). To simplify your life you could require only tty mode (or eventually also menu mode) for these questions. Users normally run oldconfig in tty mode for simplicity (there are usually only a few questions, so it is easy to get the questions already in order, without having to parse nearly empty menus). giacomo
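A hypothetical sketch of the behaviour proposed above, for the exact constraint cited: instead of rejecting a .config that violates "(X86 and SMP==y) implies RTC!=n", enumerate the single-symbol changes that would satisfy it and let the user pick one. The function name and fix list are illustrative; real .config files express "n" as "# CONFIG_FOO is not set", simplified here to "=n" for brevity:

```shell
#!/bin/sh
# check_config CONFIGFILE: if the file violates the constraint
# "(X86 and SMP==y) implies RTC!=n", list candidate fixes instead of
# simply rejecting the configuration.
check_config() {
  config=$1
  get() { sed -n "s/^$1=//p" "$config"; }

  if [ "$(get CONFIG_X86)" = y ] && [ "$(get CONFIG_SMP)" = y ] \
     && [ "$(get CONFIG_RTC)" = n ]; then
    echo "This configuration violates: (X86 and SMP==y) implies RTC!=n"
    echo "Possible fixes (assuming ARCH is invariant):"
    echo "  1) CONFIG_SMP=n"
    echo "  2) CONFIG_RTC=m"
    echo "  3) CONFIG_RTC=y"
  fi
}
```

Run against a .config containing CONFIG_X86=y, CONFIG_SMP=y, CONFIG_RTC=n it prints the three candidate corrections; a valid configuration produces no output.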
Re: [kbuild-devel] Request for comment -- a better attribution system
"Eric S. Raymond" wrote: > > This is a proposal for an attribution metadata system in the Linux kernel > sources. The goal of the system is to make it easy for people reading > any given piece of code to identify the responsible maintainer. The motivation > for this proposal is that the present system, a single top-level MAINTAINERS > file, doesn't seem to be scaling well. > > In this system, most files will contain a "map block". A map block is a > metadata section embedded in a comment near the beginning of the file. > Here is an example map block for my kxref.py tool: > Good! > And here's what a map block should look like in general: > > %Map: > T: Description of this unit for map purposes > P: Person > M: Mail patches to > L: Mailing list that is relevant to this area > W: Web-page with status/info > C: Controlling configuration symbol > D: Date this meta-info was last updated > S: Status, one of the following: > There may be more than one P: field per map block. There should be exactly one > M: field. > > The D: field may have the special value `None' meaning that this map block > was translated from old information which has not yet been confirmed with the > responsible maintainer. > > Note that this is the same set of conventions presently used in the > MAINTAINERS file, with only the T:, D:, and C: fields being new. The > contents of the C: field, if present, should be the name of the > CONFIG_ symbol that controls the inclusion of this unit in a kernel. > > (Map blocks are terminated by a blank line.) > We should use the same field names as CREDITS, e.g. D: for Description (maybe you can swap D: description and T: time of last update). It would also be nice to include the type of license (GPL, ...). This allows fast parsing (and could maybe also replace the few lines of license text). Instead of C: it is more important (IMHO) to include the module name. Maybe we can include both (module names are always lower case).
I think that the inclusion of the config option is not important (considering that it can be easily parsed from kaos' new makefiles). giacomo
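As a sketch of how cheap such metadata would be to consume mechanically, here is one way a tool might pull a map block out of a source file. It assumes the block sits in a "* "-prefixed C comment and ends at the first blank comment line, per the "terminated by a blank line" rule; the exact comment layout and the sample fields are assumptions, not part of the proposal:

```shell
#!/bin/sh
# extract_map FILE: print the fields of the first %Map: block in FILE.
# Assumes the block lives in a C comment whose lines start with " * "
# and that a blank comment line terminates the block.
extract_map() {
  awk '/%Map:/ { on = 1; next }               # start after the %Map: marker
       on && /^[ \t]*\*?[ \t]*$/ { exit }     # blank (comment) line ends it
       on { sub(/^[ \t]*\*[ \t]?/, ""); print }' "$1"
}
```

For a file containing a block with T:, P:, and M: fields, extract_map prints just those field lines, ready for grep or a MAINTAINERS-style report generator.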
Coppermine is a PIII or a Celeron? WINCHIP2/WINCHIP3D diff?
Hello! While working on cpu autoconfiguration I found some problems: I have to identify this processor: Vendor: Intel Family: 6 Model: 8 Is it a "Pentium III (Coppermine)" (setup.c:1709) or a "Celeron (Coppermine)" (setup.c:1650)? What is the difference between MWINCHIP2 and MWINCHIP3D? I can't find any differences in the sources. giacomo Version 3 of my cpu detection and configuration: diff -uNr old.linux/CREDITS linux/CREDITS --- old.linux/CREDITS Fri Dec 29 13:32:46 2000 +++ linux/CREDITS Fri Dec 29 13:43:02 2000 @@ -458,6 +458,12 @@ S: Fremont, California 94539 S: USA +N: Giacomo Catenazzi +E: [EMAIL PROTECTED] +D: Random kernel hack and fixes +D: Author of scripts/cpu_detect.sh +S: Switzerland + N: Gordon Chaffee E: [EMAIL PROTECTED] W: http://bmrc.berkeley.edu/people/chaffee/ diff -uNr old.linux/Makefile linux/Makefile --- old.linux/Makefile Fri Dec 29 11:26:55 2000 +++ linux/Makefile Fri Dec 29 13:32:10 2000 @@ -65,6 +65,16 @@ do-it-all: config endif +# Second stage configuration +# Note that GNU make will read again this Makefile, so the CONFIG are +# updated +ifeq ($(CONFIG_CPU_CURRENT), y) +CONFIGURATION = config2 +do-it-all: config2 +.config:config2 + @echo "Rescanning the main Makefile" +endif + # # INSTALL_PATH specifies where to place the updated kernel and system map # images. Uncomment if you want to place them anywhere other than root. 
@@ -273,6 +283,14 @@ config: symlinks $(CONFIG_SHELL) scripts/Configure arch/$(ARCH)/config.in + +config2: + echo "CONFIG_CPU_CURRENT=n" >> .config + echo `$(CONFIG_SHELL) $(TOPDIR)/scripts/cpu_detect.sh`=y >> .config + $(MAKE) oldconfig + +config2.test: + echo "CONFIG_CPU_CURRENT=$(CONFIG_CPU_CURRENT)" include/config/MARKER: scripts/split-include include/linux/autoconf.h scripts/split-include include/linux/autoconf.h include/config diff -uNr old.linux/arch/i386/config.in linux/arch/i386/config.in --- old.linux/arch/i386/config.in Fri Dec 29 11:26:55 2000 +++ linux/arch/i386/config.in Fri Dec 29 13:14:58 2000 @@ -26,7 +26,9 @@ mainmenu_option next_comment comment 'Processor type and features' -choice 'Processor family' \ +bool "Optimize for current CPU" CONFIG_CPU_CURRENT +if [ "$CONFIG_CPU_CURRENT" != "y" ]; then + choice 'Processor family' \ "386CONFIG_M386 \ 486CONFIG_M486 \ 586/K5/5x86/6x86/6x86MXCONFIG_M586 \ @@ -41,6 +43,10 @@ Winchip-C6 CONFIG_MWINCHIPC6 \ Winchip-2 CONFIG_MWINCHIP2 \ Winchip-2A/Winchip-3 CONFIG_MWINCHIP3D" Pentium-Pro +else + # First configuration stage: Allow all possible processors deps + define_bool CONFIG_M386 y +fi # # Define implied options from the CPU selection here # diff -uNr old.linux/scripts/cpu_detect.sh linux/scripts/cpu_detect.sh --- old.linux/scripts/cpu_detect.sh Thu Jan 1 01:00:00 1970 +++ linux/scripts/cpu_detect.sh Tue Jan 2 10:29:31 2001 @@ -0,0 +1,81 @@ +#! /bin/bash + +# Copyright (C) 2000-2001 Giacomo Catenazzi <[EMAIL PROTECTED]> +# This is free software, see GNU General Public License 2 for details. + +# This script try to autodetect the CPU. +# On SMP I assume that all processors are of the same type as the first + +# Version 3 + + +function check_cpu () { +if echo "$cpu_id" | egrep -e "$1" ; then + # CPU detected + echo $2 + exit 0 +fi +} + + +### i386 ### + +if [ "$ARCH" = "i386" ] ; then + +if [ ! 
-r /proc/cpuinfo ] ; then + echo "cpu_detect: Could not read /proc/cpuinfo" 1>&2 + echo CONFIG_M386 + exit 2 +fi + + vendor=$(sed -n 's/^vendor_id.*: *\([-A-Za-z0-9_]*\).*$/\1/pg' /proc/cpuinfo) + cpu_fam=$(sed -n 's/^cpu family.*: *\([0-9A-Za-z]*\).*$/\1/pg' /proc/cpuinfo) + cpu_mod=$(sed -n 's/^model[^a-z]*: *\([0-9A-Za-z]*\).*$/\1/pg' /proc/cpuinfo) + cpu_name=$(sed -n 's/^model name.*: *\(.*\)$/\1/pg' /proc/cpuinfo) + cpu_id="$vendor:$cpu_fam:$cpu_mod:$cpu_name" + +#echo $cpu_id # for debug + +check_cpu '^GenuineIntel:.*:.*:Pentium [67]' CONFIG_M586TSC +check_cpu '^GenuineIntel:.*:.*:Pentium MMX' CONFIG_M586MMX +check_cpu '^GenuineIntel:.*:.*:Pentium Pro' CONFIG_M686 +check_cpu '^GenuineIntel:.*:.*:Pentium II\>' CONFIG_M686 +check_cpu '^GenuineIntel:.*:.*:Celeron' CONFIG_M686 +check_cpu '^GenuineIntel:.*:.*:Pentium III' CONFIG_M686FXSR +check_cpu '^GenuineIntel:.*:.*:Pentium IV'CONFIG_MPENTIUM4 # ??? +check_cpu '^GenuineIntel:4:.*:' CONFIG_M486 +check_cpu '^GenuineIntel:5:[01237]:' CONFIG_M586TSC +check_cpu '^GenuineIntel:5:[48]:' CONFIG_M586MMX +check_cpu '^G
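The matching logic in cpu_detect.sh is easy to exercise stand-alone. Below is a minimal extract of the check_cpu helper from the patch, run against the Family 6 / Model 8 identity asked about above (the sample model-name string is an assumption; egrep is written as grep -E, and the exit 0 is dropped so several probes can be tried in one shell):

```shell
#!/bin/bash
# Stand-alone copy of the check_cpu matcher from cpu_detect.sh: print
# the CONFIG_ symbol for the first regex that matches the cpu identity.
check_cpu() {
  if echo "$cpu_id" | grep -Eq "$1"; then
    echo "$2"    # matched: emit the CONFIG_ symbol for this CPU class
    return 0
  fi
  return 1
}

# Identity string in the script's "vendor:family:model:name" form;
# the model name here is assumed for illustration.
cpu_id='GenuineIntel:6:8:Pentium III (Coppermine)'

check_cpu '^GenuineIntel:.*:.*:Pentium III' CONFIG_M686FXSR
```

The model-name probes fire before the family:model fallbacks, which is why the Coppermine PIII-vs-Celeron ambiguity in the question matters: both report Family 6, Model 8, so only the model-name string distinguishes them.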
Coppermine is a PIII or a Celeron? WINCHIP2/WINCHIP3D diff?
Hello! When working in cpu autoconfiguration I found some problems: I have to identify this processor: Vendor: Intel Family: 6 Model: 8 Is it a "Pentium III (Coppermine)" (setup.c:1709) or a "Celeron (Coppermine)" (setup.c:1650) ? What is the difference between MWINCHIP2 and MWINCHIP3D? I don't find differences in the sources giacomo Version 3 of my cpu detection and configuration: diff -uNr old.linux/CREDITS linux/CREDITS --- old.linux/CREDITS Fri Dec 29 13:32:46 2000 +++ linux/CREDITS Fri Dec 29 13:43:02 2000 @@ -458,6 +458,12 @@ S: Fremont, California 94539 S: USA +N: Giacomo Catenazzi +E: [EMAIL PROTECTED] +D: Random kernel hack and fixes +D: Author of scripts/cpu_detect.sh +S: Switzerland + N: Gordon Chaffee E: [EMAIL PROTECTED] W: http://bmrc.berkeley.edu/people/chaffee/ diff -uNr old.linux/Makefile linux/Makefile --- old.linux/Makefile Fri Dec 29 11:26:55 2000 +++ linux/Makefile Fri Dec 29 13:32:10 2000 @@ -65,6 +65,16 @@ do-it-all: config endif +# Second stage configuration +# Note that GNU make will read again this Makefile, so the CONFIG are +# updated +ifeq ($(CONFIG_CPU_CURRENT), y) +CONFIGURATION = config2 +do-it-all: config2 +.config:config2 + @echo "Rescanning the main Makefile" +endif + # # INSTALL_PATH specifies where to place the updated kernel and system map # images. Uncomment if you want to place them anywhere other than root. 
@@ -273,6 +283,14 @@ config: symlinks $(CONFIG_SHELL) scripts/Configure arch/$(ARCH)/config.in + +config2: + echo "CONFIG_CPU_CURRENT=n" >> .config + echo `$(CONFIG_SHELL) $(TOPDIR)/scripts/cpu_detect.sh`=y >> .config + $(MAKE) oldconfig + +config2.test: + echo "CONFIG_CPU_CURRENT=$(CONFIG_CPU_CURRENT)" include/config/MARKER: scripts/split-include include/linux/autoconf.h scripts/split-include include/linux/autoconf.h include/config diff -uNr old.linux/arch/i386/config.in linux/arch/i386/config.in --- old.linux/arch/i386/config.in Fri Dec 29 11:26:55 2000 +++ linux/arch/i386/config.in Fri Dec 29 13:14:58 2000 @@ -26,7 +26,9 @@ mainmenu_option next_comment comment 'Processor type and features' -choice 'Processor family' \ +bool "Optimize for current CPU" CONFIG_CPU_CURRENT +if [ "$CONFIG_CPU_CURRENT" != "y" ]; then + choice 'Processor family' \ "386 CONFIG_M386 \ 486 CONFIG_M486 \ 586/K5/5x86/6x86/6x86MX CONFIG_M586 \ @@ -41,6 +43,10 @@ Winchip-C6 CONFIG_MWINCHIPC6 \ Winchip-2 CONFIG_MWINCHIP2 \ Winchip-2A/Winchip-3 CONFIG_MWINCHIP3D" Pentium-Pro +else + # First configuration stage: Allow all possible processors deps + define_bool CONFIG_M386 y +fi # # Define implied options from the CPU selection here # diff -uNr old.linux/scripts/cpu_detect.sh linux/scripts/cpu_detect.sh --- old.linux/scripts/cpu_detect.sh Thu Jan 1 01:00:00 1970 +++ linux/scripts/cpu_detect.sh Tue Jan 2 10:29:31 2001 @@ -0,0 +1,81 @@ +#! /bin/bash + +# Copyright (C) 2000-2001 Giacomo Catenazzi <[EMAIL PROTECTED]> +# This is free software, see GNU General Public License 2 for details. + +# This script try to autodetect the CPU. +# On SMP I assume that all processors are of the same type as the first + +# Version 3 + + +function check_cpu () { +if echo "$cpu_id" | egrep -e "$1" ; then + # CPU detected + echo $2 + exit 0 +fi +} + + +### i386 ### + +if [ "$ARCH" = "i386" ] ; then + +if [ ! -r /proc/cpuinfo ] ; then + echo "cpu_detect: Could not read /proc/cpuinfo" 1>&2 + echo CONFIG_M386 + exit 2 +fi + + vendor=$(sed -n 's/^vendor_id.*: *\([-A-Za-z0-9_]*\).*$/\1/pg' /proc/cpuinfo) + cpu_fam=$(sed -n 's/^cpu family.*: *\([0-9A-Za-z]*\).*$/\1/pg' /proc/cpuinfo) + cpu_mod=$(sed -n 's/^model[^a-z]*: *\([0-9A-Za-z]*\).*$/\1/pg' /proc/cpuinfo) + cpu_name=$(sed -n 's/^model name.*: *\(.*\)$/\1/pg' /proc/cpuinfo) + cpu_id="$vendor:$cpu_fam:$cpu_mod:$cpu_name" + +#echo $cpu_id # for debug + +check_cpu '^GenuineIntel:.*:.*:Pentium [67]' CONFIG_M586TSC +check_cpu '^GenuineIntel:.*:.*:Pentium MMX' CONFIG_M586MMX +check_cpu '^GenuineIntel:.*:.*:Pentium Pro' CONFIG_M686 +check_cpu '^GenuineIntel:.*:.*:Pentium II\>' CONFIG_M686 +check_cpu '^GenuineIntel:.*:.*:Celeron' CONFIG_M686 +check_cpu '^GenuineIntel:.*:.*:Pentium III' CONFIG_M686FXSR +check_cpu '^GenuineIntel:.*:.*:Pentium IV' CONFIG_MPENTIUM4 # ??? +check_cpu '^GenuineIntel:4:.*:' CONFIG_M486 +check_cpu '^GenuineIntel:5:[01237]:' CONFIG_M586TSC +check_cpu '^GenuineIntel:5:[48]:' CONFIG_M586MMX +check_cpu '^GenuineIntel:6:[01356]:' CONFIG_M686
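The check_cpu helper above can be exercised without reading /proc/cpuinfo by feeding it a hand-built cpu_id string. Here is a minimal sketch of that matching scheme: the cpu_id values are invented examples, `grep -E` stands in for egrep, and its output is silenced so only the CONFIG symbol is printed (the original echoes the matched line as well).

```shell
#!/bin/sh
# Sketch of the v3 scheme: build "vendor:family:model:name" and test it
# against extended-regex patterns, most specific first; the first match
# wins by exiting, so detection is run in a subshell.

check_cpu () {
    # $1 = pattern, $2 = CONFIG symbol to emit on a match
    if echo "$cpu_id" | grep -E -e "$1" >/dev/null ; then
        echo "$2"
        exit 0      # stop at the first matching rule
    fi
}

detect () {
    cpu_id="$1"
    check_cpu '^GenuineIntel:.*:.*:Pentium III' CONFIG_M686FXSR
    check_cpu '^GenuineIntel:6:[01356]:'        CONFIG_M686
    echo CONFIG_M386                            # fallback: plain 386
}

# $( ) runs detect in a subshell, so the exit only ends the detection:
detected=$(detect 'GenuineIntel:6:8:Pentium III (Coppermine)')
echo "$detected"    # CONFIG_M686FXSR
```

Name-based rules come before family/model rules for the same reason as in the patch: a Coppermine reports family 6, so the readable model name is the only way to pick M686FXSR over M686.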
Re: [PATCH, v2] Processor autodetection (when configuring kernel)
Version 2: . Added a PIII . Corrected the name of Crusoe . Added the generic Intel and AMD 486 . Corrected the braces {,} (wrong syntax) giacomo diff -urN old.linux/CREDITS linux/CREDITS --- old.linux/CREDITS Fri Dec 29 13:32:46 2000 +++ linux/CREDITS Fri Dec 29 13:43:02 2000 @@ -458,6 +458,12 @@ S: Fremont, California 94539 S: USA +N: Giacomo Catenazzi +E: [EMAIL PROTECTED] +D: Random kernel hack and fixes +D: Author of scripts/cpu_detect.sh +S: Switzerland + N: Gordon Chaffee E: [EMAIL PROTECTED] W: http://bmrc.berkeley.edu/people/chaffee/ diff -urN old.linux/Makefile linux/Makefile --- old.linux/Makefile Fri Dec 29 11:26:55 2000 +++ linux/Makefile Fri Dec 29 13:32:10 2000 @@ -65,6 +65,16 @@ do-it-all: config endif +# Second stage configuration +# Note that GNU make will read again this Makefile, so the CONFIG are +# updated +ifeq ($(CONFIG_CPU_CURRENT), y) +CONFIGURATION = config2 +do-it-all: config2 +.config:config2 + @echo "Rescanning the main Makefile" +endif + # # INSTALL_PATH specifies where to place the updated kernel and system map # images. Uncomment if you want to place them anywhere other than root. 
@@ -273,6 +283,14 @@ config: symlinks $(CONFIG_SHELL) scripts/Configure arch/$(ARCH)/config.in + +config2: + echo "CONFIG_CPU_CURRENT=n" >> .config + echo `$(CONFIG_SHELL) $(TOPDIR)/scripts/cpu_detect.sh`=y >> .config + $(MAKE) oldconfig + +config2.test: + echo "CONFIG_CPU_CURRENT=$(CONFIG_CPU_CURRENT)" include/config/MARKER: scripts/split-include include/linux/autoconf.h scripts/split-include include/linux/autoconf.h include/config diff -urN old.linux/arch/i386/config.in linux/arch/i386/config.in --- old.linux/arch/i386/config.in Fri Dec 29 11:26:55 2000 +++ linux/arch/i386/config.in Fri Dec 29 13:14:58 2000 @@ -26,7 +26,9 @@ mainmenu_option next_comment comment 'Processor type and features' -choice 'Processor family' \ +bool "Optimize for current CPU" CONFIG_CPU_CURRENT +if [ "$CONFIG_CPU_CURRENT" != "y" ]; then + choice 'Processor family' \ "386 CONFIG_M386 \ 486 CONFIG_M486 \ 586/K5/5x86/6x86/6x86MX CONFIG_M586 \ @@ -41,6 +43,10 @@ Winchip-C6 CONFIG_MWINCHIPC6 \ Winchip-2 CONFIG_MWINCHIP2 \ Winchip-2A/Winchip-3 CONFIG_MWINCHIP3D" Pentium-Pro +else + # First configuration stage: Allow all possible processors deps + define_bool CONFIG_M386 y +fi # # Define implied options from the CPU selection here # diff -urN old.linux/scripts/cpu_detect.sh linux/scripts/cpu_detect.sh --- old.linux/scripts/cpu_detect.sh Thu Jan 1 01:00:00 1970 +++ linux/scripts/cpu_detect.sh Fri Dec 29 17:10:23 2000 @@ -0,0 +1,38 @@ +#! /bin/bash + +# Copyright (C) 2000 Giacomo Catenazzi <[EMAIL PROTECTED]> +# This is free software, see GNU General Public License 2 for details. + +# This script try to autodetect the CPU. 
+# On SMP I assume that all processors are of the same type as the first + + +if [ "$ARCH" = "i386" ] ; then + vendor=$(sed -n 's/^vendor_id.*: \([-A-Za-z0-9_]*\).*$/\1/pg' /proc/cpuinfo) + cpu_fam=$(sed -n 's/^cpu family.*: \([0-9A-Za-z]*\).*$/\1/pg' /proc/cpuinfo) + cpu_mod=$(sed -n 's/^model[^a-z]*: \([0-9A-Za-z]*\).*$/\1/pg' /proc/cpuinfo) + cpu_id="$vendor:$cpu_fam:$cpu_mod" + + #echo $cpu_id # for debug + + case $cpu_id in +GenuineIntel:4:*) echo CONFIG_M486 ;; # exists ? +GenuineIntel:5:[0123] ) echo CONFIG_M586TSC ;; +GenuineIntel:5:[48] ) echo CONFIG_M586MMX ;; +GenuineIntel:6:[01356] ) echo CONFIG_M686 ;; +GenuineIntel:6:[789]) echo CONFIG_M686FXSR ;; +GenuineIntel:6:1[1] ) echo CONFIG_M686FXSR ;; +AuthenticAMD:4:*) echo CONFIG_M486 ;; +AuthenticAMD:5:[0123] ) echo CONFIG_M586 ;; +AuthenticAMD:5:[89] ) echo CONFIG_MK6 ;; +AuthenticAMD:5:1[01]) echo CONFIG_MK6 ;; +AuthenticAMD:6:[0124] ) echo CONFIG_MK7 ;; +UMC:4:[12] ) echo CONFIG_M486 ;; # "UMC" ! +NexGenDriven:5:0) echo CONFIG_M386 ;; +TransmetaCPU:* ) echo CONFIG_MCRUSOE ;; +GenuineTMx86:* ) echo CONFIG_MCRUSOE ;; + +# default value +* ) echo CONFIG_M386 ;; + esac +fi - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] Please read the FAQ at http://www.tux.org/lkml/
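The "{,}" fix in this version is worth spelling out: `case` patterns use shell glob syntax with `|` alternation, while `{8,9,11}` is brace-expansion syntax that a `case` pattern treats as literal characters, so the v1 rules could never match a real model number. A small sketch with made-up cpu_id values, using the corrected bracket patterns from this v2 patch:

```shell
#!/bin/sh
# case matches glob patterns; braces are NOT expanded in patterns,
# so {8,9,11} only matches the literal string "{8,9,11}".

broken_classify () {
    case "$1" in
        GenuineIntel:6:{8,9,11} ) echo CONFIG_M686FXSR ;;
        * ) echo CONFIG_M386 ;;
    esac
}

fixed_classify () {
    # | alternation and character classes, as in the v2 script
    case "$1" in
        GenuineIntel:6:[789] | GenuineIntel:6:1[1] ) echo CONFIG_M686FXSR ;;
        * ) echo CONFIG_M386 ;;
    esac
}

broken_classify GenuineIntel:6:9    # CONFIG_M386: braces matched literally
fixed_classify  GenuineIntel:6:9    # CONFIG_M686FXSR
```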
[PATCH] Processor autodetection (when configuring kernel)
Hi Linus! Here is a first try at autodetecting the processor when configuring the kernel. How it works: 1) I add a CONFIG_CPU_CURRENT boolean. 2) If it is set, the next Makefile run will call scripts/cpu_detect.sh and try to detect the processor (it returns CONFIG_M386 if it fails) 3) The Makefile sets the autodetected processor and CONFIG_CPU_CURRENT=n, so next time it will use the already detected CPU 4) It does a make oldconfig to configure the other processor flags (and to be sure that it will be a correct configuration file) 5) GNU make will automagically restart the Makefile, because an include (include .config) has changed. I don't know the non-Intel processors well, so additions for extra processors are welcome. giacomo The patch = diff -urN old.linux/Makefile linux/Makefile --- old.linux/Makefile Fri Dec 29 11:26:55 2000 +++ linux/Makefile Fri Dec 29 13:32:10 2000 @@ -65,6 +65,16 @@ do-it-all: config endif +# Second stage configuration +# Note that GNU make will read again this Makefile, so the CONFIG are +# updated +ifeq ($(CONFIG_CPU_CURRENT), y) +CONFIGURATION = config2 +do-it-all: config2 +.config:config2 + @echo "Rescanning the main Makefile" +endif + # # INSTALL_PATH specifies where to place the updated kernel and system map # images. Uncomment if you want to place them anywhere other than root. 
@@ -273,6 +283,14 @@ config: symlinks $(CONFIG_SHELL) scripts/Configure arch/$(ARCH)/config.in + +config2: + echo "CONFIG_CPU_CURRENT=n" >> .config + echo `$(CONFIG_SHELL) $(TOPDIR)/scripts/cpu_detect.sh`=y >> .config + $(MAKE) oldconfig + +config2.test: + echo "CONFIG_CPU_CURRENT=$(CONFIG_CPU_CURRENT)" include/config/MARKER: scripts/split-include include/linux/autoconf.h scripts/split-include include/linux/autoconf.h include/config diff -urN old.linux/arch/i386/config.in linux/arch/i386/config.in --- old.linux/arch/i386/config.in Fri Dec 29 11:26:55 2000 +++ linux/arch/i386/config.in Fri Dec 29 13:14:58 2000 @@ -26,7 +26,9 @@ mainmenu_option next_comment comment 'Processor type and features' -choice 'Processor family' \ +bool "Optimize for current CPU" CONFIG_CPU_CURRENT +if [ "$CONFIG_CPU_CURRENT" != "y" ]; then + choice 'Processor family' \ "386 CONFIG_M386 \ 486 CONFIG_M486 \ 586/K5/5x86/6x86/6x86MX CONFIG_M586 \ @@ -41,6 +43,10 @@ Winchip-C6 CONFIG_MWINCHIPC6 \ Winchip-2 CONFIG_MWINCHIP2 \ Winchip-2A/Winchip-3 CONFIG_MWINCHIP3D" Pentium-Pro +else + # First configuration stage: Allow all possible processors deps + define_bool CONFIG_M386 y +fi # # Define implied options from the CPU selection here # diff -urN old.linux/scripts/cpu_detect.sh linux/scripts/cpu_detect.sh --- old.linux/scripts/cpu_detect.sh Thu Jan 1 01:00:00 1970 +++ linux/scripts/cpu_detect.sh Fri Dec 29 14:10:42 2000 @@ -0,0 +1,35 @@ +#! /bin/bash + +# Copyright (C) 2000 Giacomo Catenazzi <[EMAIL PROTECTED]> +# This is free software, see GNU General Public License 2 for details. + +# This script try to autodetect the CPU. 
+# On SMP I assume that all processors are of the same type as the first + + +if [ "$ARCH" = "i386" ] ; then + vendor=$(sed -n 's/^vendor_id.*: \([-A-Za-z0-9_]*\).*$/\1/pg' /proc/cpuinfo) + cpu_fam=$(sed -n 's/^cpu family.*: \([0-9A-Za-z]*\).*$/\1/pg' /proc/cpuinfo) + cpu_mod=$(sed -n 's/^model[^a-z]*: \([0-9A-Za-z]*\).*$/\1/pg' /proc/cpuinfo) + cpu_id="$vendor:$cpu_fam:$cpu_mod" + + #echo $cpu_id # for debug + + case $cpu_id in +GenuineIntel:5:[0123] ) echo CONFIG_M586TSC ;; +GenuineIntel:5:[48]) echo CONFIG_M586MMX ;; +GenuineIntel:6:[01356] ) echo CONFIG_M686 ;; +GenuineIntel:6:{8,9,11}) echo CONFIG_M686FXSR ;; +AuthenticAMD:5:[0123] ) echo CONFIG_M586 ;; +AuthenticAMD:5:{8,9,10,11} ) echo CONFIG_MK6 ;; +AuthenticAMD:6:[0124] ) echo CONFIG_MK7 ;; +UMC:4:[12] ) echo CONFIG_M486 ;; # "UMC" ! +NexGenDriven:5:0 ) echo CONFIG_M386 ;; +{TransmetaCPU,GenuineTMx86}:* ) echo CONFIG_MCROSUE ;; + +# some default values + +* ) echo CONFIG_M386 ;; + + esac +fi diff -urN old.linux/CREDITS linux/CREDITS --- old.linux/CREDITS Fri Dec 29 13:32:46 2000 +++ linux/CREDITS Fri Dec 29 13:43:02 2000 @@ -458,6 +458,12 @@ S: Fremont, California 94539 S: USA +N: Giacomo Catenazzi +E: [EMAIL PROTECTED] +D: Random kernel hack and fixes +D: Author of scripts/cpu_detect.sh +S: Switzerland + N: Gordon Chaffee E: [EMAIL PROTECTED] W: http://bmrc.berkeley.edu/people/chaffee/
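The sed lines at the top of cpu_detect.sh can be tried out against a canned /proc/cpuinfo fragment. The sketch below uses the same substitutions as the patch, but runs them on an invented sample (a real /proc/cpuinfo has many more fields):

```shell
#!/bin/sh
# Each sed keeps only its field's value; together they build the
# "vendor:family:model" key that the case statement dispatches on.

cpuinfo='vendor_id       : GenuineIntel
cpu family      : 6
model           : 8
model name      : Pentium III (Coppermine)'

vendor=$( echo "$cpuinfo" | sed -n 's/^vendor_id.*: \([-A-Za-z0-9_]*\).*$/\1/pg')
cpu_fam=$(echo "$cpuinfo" | sed -n 's/^cpu family.*: \([0-9A-Za-z]*\).*$/\1/pg')
# [^a-z]* after "model" cannot cross the n of "model name", so this
# picks up only the numeric "model" line:
cpu_mod=$(echo "$cpuinfo" | sed -n 's/^model[^a-z]*: \([0-9A-Za-z]*\).*$/\1/pg')

cpu_id="$vendor:$cpu_fam:$cpu_mod"
echo "$cpu_id"      # GenuineIntel:6:8
```

The `[^a-z]*` trick is the interesting part: both "model" and "model name" start with the same word, and the negated class is what keeps the "model name" line from clobbering the numeric model.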
Re: [KBUILD] How do we handle autoconfiguration?
"Eric S. Raymond" wrote: > > I backed away from this because Giacomo Catenazzi told me he was > working on a separate autoconfigurator that would generate config > files in CML1 format. That's a cleaner design -- one would run his > autoconfigurator and then import the resulting config into the CML2 > configurator as frozen (immutable) symbols. Giacomo, what's the state > of your project? Status: Now I can detect most of modern hardware, and also some software protocols. My autoconfiguration only in few case say N, because is it difficult to say: you don't need this drivers (e.g. the Matrox Millemium include a list of PCI devices, but my card is not included, but anyway, the driver works, because after the check of PCI ID, it do further checks on other video PCI class). I'm happy with the autodetection. I need some other software protocols to detect, but it is difficult to distinguish: "need protocols" and "protocols included in actual kernel (but nobody will use it)". I've difficult to merge with the CML1/2: In CML2-0.8.3 the include frozen flag (-I) is broken, and also the new -W flag is broken, thus no real test. CML2-0.9.0 need Python 2, which is not in the debian unstable distribution, and I had no time to compile myself. CML1: There is to many not answered question (most not important) thus it has little value in use with make oldconfig. (CML2 has a "oldmenuconfig and oldxconfig") Thus I should modify the Configure files, but with CML2 I will also add a NOVICE/NORMAL/EXPERT configuration menu, to include ELF, COFF, IPC, SHM, ... . It is in project since lots of days, but ELM, COFF, ELF64 are processor dependent, and I don't find (yet) a clean method to include this dependence (CML2). giacomo TODO: a correct/complete autodetection of CPU. - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] Please read the FAQ at http://www.tux.org/lkml/
Re: [KBUILD] How do we handle autoconfiguration?
"Eric S. Raymond" wrote: > > I wrote: > >Giacomo, what's the state of your project? > > Sigh, I got an address-invalid bounce from Giacomo. Looks like he > may have fallen off the net. I still receive mails! Maybe try [EMAIL PROTECTED] [better administrators, better software :-)] giacomo - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] Please read the FAQ at http://www.tux.org/lkml/