Re: When the FUD is all around (sniff).
At 01:34 PM 6/26/01 +0100, Alan Cox wrote:
>It is common for newspaper staff to be corrupt, same with magazine people.
>Sometimes because people generally believe in a cause and are not impartial
>(which I've seen both pro and anti Linux btw) and sometimes because
>advertising revenue is a good thing.

Alan, never attribute to conspiracy that which is adequately explained by stupidity. You would be surprised how often newspaper and magazine reporters and their editors make gaffes like this with absolutely no intent of malice, or with the sole malice of wanting to write an article that will be "interesting" to a large readership while taking insufficient time to check all the facts.

When I worked at a magazine as a staffer, I was amazed when the editor-in-chief, in response to complaints about a columnist who "got it wrong" a lot, said that he kept the columnist on because his mistakes attracted readership. "He gets TONS of letters, and the readers can't wait to see how he screws up next!" Again, not by intent, but by incompetence.

For reporters, it's not a matter of "not caring"; it's a matter of being required to knock out 15 articles in one day (not uncommon for a reporter on a major metropolitan newspaper) and not having references at hand who are willing to provide answers quickly.

Final comment: Know what a well-adjusted paranoid is? "Hey, they ARE out to get you, but it's nothing personal."

Satch

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
Please read the FAQ at http://www.tux.org/lkml/
Re: When the FUD is all around (sniff).
At 02:33 PM 6/26/01 +0200, Alessandro Suardi wrote:
>To top this off with complete crap, after mentioning Gracenote:
>
> "There may be a paradoxical situation: the [Microsoft] appeal judge
>  may restore the Microsoft monolith that judge Jackson wanted to
>  break in small pieces. And in the meantime who could end up under
>  accusation for excessive power and monopoly "temptation" would be
>  the arch-enemy Torvalds."
>
>I have trouble in finding words to describe such blatant ignorance.

The Internet Press Guild (http://www.netpress.org) is open to reporters and editors of any country, not just to US/Canada reporters. When you write a letter to the editor in response to articles such as the one you quote in La Repubblica, be sure to mention this fact.

The Internet Press Guild was formed by Internet-savvy working reporters in response to the Time magazine CyberPorn article. (They were chased off the newsgroup alt.internet.media-coverage when the SNR dropped like a rock.) Its mission: to provide a place where the working press can check stories they are about to run regarding the Internet, to avoid egg-on-face syndrome.

The IPG contains a few people close to Linux development, so while a story like this is a little beyond the original charter, a quick check would have avoided the gaffe.
Re: What are the VM motivations??
At 03:26 PM 6/24/01 -0300, Rik van Riel wrote:
>On Sun, 24 Jun 2001, Jason McMullan wrote:
>
> > Uhh. That's not what I was ranting about. What I was
> > ranting about is that we have never 'put to paper' the
> > requirements ('motivations') for a good VM, nor have we
> > looked at said nonexistent list and figured out what instrumentation
> > would be needed.
>
>But we have. The fact that you missed the event doesn't
>make it any less true.

URL, please? If the requirements/motivations were "put on paper," getting them on the Web is not a big deal. If it's *only* on paper, I can give you my fax number and I'll be happy to put it up on the Web.

Satch
Re: Threads are processes that share more
At 08:48 PM 6/20/01 +0200, Martin Devera wrote:
>BTW is not possible to implement threads as subset of process ?
>Like thread list pointed to from task_struct. It'd contain
>thread_structs plus another scheduler's data.
>The thread could be much smaller than process.
>
>Probably there is another problem I don't see, I'm just
>curious whether it can work like this ..

Threads would then run, as a group, at the priority of the process, and then by priority within the process thread group. To be truly useful, threads need to be able to have their run priority divorced from the priority of the spawning process.

By the way, I'm surprised no one has mentioned that a synonym for "thread" is "lightweight process".

Satch
Threads FAQ entry incomplete (was: Alan Cox quote?)
At 03:12 PM 6/19/01 -0600, Richard Gooch wrote:
>New FAQ entry: http://www.tux.org/lkml/#s7-21
>
>Yeah, it's probably a bit harsh :-)

It's also incomplete, in my view, to the point of being misleading.

Part of the reason I'm reacting so harshly to this FAQ entry is that it flies in the face of my own experience. I do a lot of real-time and near-real-time programming, and find that the process and thread models are essential to the proper and expected operation of my code. The FAQ answer neglects this entire area of applications.

For details: I run a process (not threads as yet, but it will happen) for each event I need to react to in near real time. The running priority of these processes is set as high as possible so that when the event occurs the newly-unblocked process is guaranteed to run almost immediately. These processes, when unblocked, do the absolute minimum amount of work to meet the real-time requirements and then block again. If an event requires extended processing to begin but not to complete in real time, the event process sends an appropriate signal to the main process to perform that extended processing.

The main problem with the comment in the FAQ entry that "it's easy to write an event callback system" is that the response time may not meet the needs of the application, particularly on computer systems that have multiple near-real-time applications running at the same time.

The key design point in this near-real-time model is that it does not violate the rule of thumb described in the FAQ answer, that the Linux kernel is designed to work well with a small number of RUNNING processes/threads. Ideally, a high-priority process should run for much, much less time than the standard time-slice so as to minimize the number of scheduler elements marked as "ready to run." My standard is for a high-priority process driven by an event to run for no more than a millisecond or so, and to leave any heavy lifting to the main process to take care of when the main process gets around to it.

For true real-time processing, you have to write device drivers to react to external events directly, a programming task that is more burdensome and error-prone. Or, better, use a real RTOS instead of Linux. When the real-time requirement isn't so critical, though, the method I have described allows the whole thing to be done in userland, where it can be debugged far more easily.

In summary, writing code to use many processes or threads is NOT lazy programming in all instances, as the FAQ answer implies.

Stephen Satchell
writing time-critical programs for only a short time -- 30 years
Re: obsolete code must die
At 12:24 AM 6/14/01 -0300, Rik van Riel wrote:
>Everything you propose to get rid of are DRIVERS. They
>do NOT complicate the core kernel, do NOT introduce bugs
>in the core kernel and have absolutely NOTHING to do with
>how simple or maintainable the core kernel is.

Not quite. There were two non-driver suggestions that the man did make: remove 386/486 support and remove floating-point emulation support. Both are bad for the embedded-systems space, because the 486 is still widely used there.

Is all the bus support code exclusively in drivers, or is there something compiled into the nucleus for start-up?

I didn't see your "don't feed the troll" sign before...

Satch
Re: [PATCH] Single user linux
At 09:03 PM 4/26/01 +0700, you wrote:
>right now it's the kernel who thinks that root
>is special, and applications work around that because there's a
>division of super-user and plain user. is that a must?

Short answer: Yes.

Long answer: The division is artificial, but it is absolutely necessary for administration of a Unix-type system. For example, when the currently running process is not running as a "superuser" process, it cannot run resources down to absolute zero -- think disk allocation. This means that the administrator (who may be the same person as the "user") has a chance of recovering from a runaway process gracefully, by being able to go in and kill that process before the whole system lies down and dies.

Ever watch what happens when Windows runs out of "swap space" because the swap file can't get any space? Ever try to recover from it? Make damn sure you have the non-upgrade CD around when you try this. Even more important, make sure you have multiple back-ups when you try this.

The whole point of "user" and "superuser" is that when the user does something stupid or careless or even malicious, the superuser can bail the system out. You don't usually work in superuser mode, and programs that don't need superuser access don't get it. Humans make mistakes a number of orders of magnitude more often than computers do. The barrier helps minimize the damage.

Satch
Re: [PATCH] Single user linux
"Thinking out of the box," you don't need to modify the kernel or the userland utilities to make Linux automatically launch a dedicated terminal for embedded applications. All you need to do is look at the file /etc/inittab and read the man pages for this file. For console access, you merely make a shell the first program launched, and you specify RESPAWN as the restart type so that if the shell crashes you get your shell back. The invocation may need to be put in a wrapper so that standard input, standard output, and standard error are set properly, as are the environment variables. The security model of Unix need not be sacrificed. The wrapper can set the user ID to a default non-zero user so that there is more security than the all-root solution that others have suggested. For administrative duties, the user would use su (and appropriate password) to acquire the appropriate permissions. Back when Unix was first given out by Bell Labs in the '70s, several Bell people wrote papers describing exactly how to do this sort of thing in Version 7. In the thirty years since the technique was described, the underlying structure -- init/getty/login -- hasn't changed. I suspect that many people here haven't explored the power of inittab, especially given the discussion about dying daemons a few months back and how the problem was solved in the beginning and the solution ignored today. (For those of you interested, you might want to check the archives for the tangent in the OOMkiller discussion.) (Sorry, I've not found those papers on-line, and my copies were lost about seven moves ago.) Satch At 06:44 PM 4/24/01 +0700, [EMAIL PROTECTED] wrote: >hi, > >a friend of my asked me on how to make linux easier to use >for personal/casual win user. > >i found out that one of the big problem with linux and most >other operating system is the multi-user thing. > >i think, no personal computer user should know about what's >an operating system idea of a user. 
they just want to use >the computer, that's it. > >by a personal computer i mean home pc, notebook, tablet, >pda, and communicator. only one user will use those devices, >or maybe his/her friend/family. do you think that user want >to know about user account? > >from that, i also found out that it is very awkward to type >username and password every time i use my computer. >so here's a patch. i also have removed the user_struct from >my kernel, but i don't think you'd like #ifdef's. >may be it'll be good for midori too. > > > imel > > > >--- sched.h Mon Apr 2 18:57:06 2001 >+++ sched.h~Tue Apr 24 17:32:33 2001 >@@ -655,6 +655,12 @@ >unsigned long, const char *, void *); > extern void free_irq(unsigned int, void *); > >+#ifdef CONFIG_NOUSER >+#define capable(x) 1 >+#define suser()1 >+#define fsuser() 1 >+#else >+ > /* > * This has now become a routine instead of a macro, it sets a flag if > * it returns true (to do BSD-style accounting where the process is flagged >@@ -706,6 +712,8 @@ > } > return 0; > } >+ >+#endif /* CONFIG_NOUSER */ > > /* > * Routines for handling mm_structs > >diff -ur linux/Documentation/Configure.help >nouser/Documentation/Configure.help >--- linux/Documentation/Configure.help Mon Apr 2 18:53:29 2001 >+++ nouser/Documentation/Configure.help Tue Apr 24 18:08:49 2001 >@@ -13626,6 +13626,14 @@ >a work-around for a number of buggy BIOSes. Switch this option on if >your computer crashes instead of powering off properly. > >+Disable Multi-user (DANGEROUS) >+CONFIG_NOUSER >+ Disable kernel multi-user support. Normally, we treat each user >+ differently, depending on his/her permissions. If you _really_ >+ think that you're not going to use your computer in a hostile >+ environment and would like to cut a few bytes, say Y. >+ Most people should say N. 
>+ > Watchdog Timer Support > CONFIG_WATCHDOG >If you say Y here (and to one of the following options) and create a >diff -ur linux/arch/i386/config.in nouser/arch/i386/config.in >--- linux/arch/i386/config.in Mon Feb 5 18:50:27 2001 >+++ nouser/arch/i386/config.in Tue Apr 24 17:53:42 2001 >@@ -244,6 +244,8 @@ > bool 'Use real mode APM BIOS call to power off' > CONFIG_APM_REAL_MODE_POWER_OFF > fi > >+bool 'Disable Multi-user (DANGEROUS)' CONFIG_NOUSER >+ > endmenu > > source drivers/mtd/Config.in > >- >To unsubscribe from this list: send the line "unsubscribe linux-kernel" in >the body of a message to [EMAIL PROTECTED] >More majordomo info at http://vger.kernel.org/majordomo-info.html >Please read the FAQ at http://www.tux.org/lkml/ - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
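For reference, the /etc/inittab approach described above might look like the fragment below. The tty id, runlevels, and wrapper path are illustrative, not taken from any particular distribution:

```
# /etc/inittab fragment: respawn a console shell so a crash brings it right back
# format is id:runlevels:action:process
c1:2345:respawn:/usr/local/bin/console-wrapper /bin/sh
```

Here console-wrapper stands in for a small program that opens the console device on stdin/stdout/stderr, sets the environment, and drops to a non-root user ID before exec'ing the shell.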
Re: [PATCH] pedantic code cleanup - am I wasting my time with this?
At 05:58 PM 4/23/01 +0200, you wrote:
>>On Mon, Apr 23, 2001 at 05:26:27PM +0200, Jesper Juhl wrote:
>>>last entry should not have a trailing comma.
>>Sadly not. This isn't a gcc thing: ANSI says that trailing comma is ok
>>(K&R Second edition, A8.7 -- pages 218 & 219 in my copy)
>
>You are right, I just consulted my own copy, and nothing strictly forbids
>the comma... Sorry about that, I should have been more thorough before
>reporting that one...

From the X3J11 Rationale document, paraphrased: The inclusion of optional trailing commas is to ease the task of generating code by automatic programs such as LEX and YACC.

Satch
Re: goodbye
At 09:02 PM 4/7/01 -0700, Joseph Carter wrote:
>Not always an option. There are many places in the world in which your
>ISP is a monopoly. And even in your simplistic view of the world, there
>are many places in the United States where you are held captive by not
>having more than one local ISP. That's even more true of broadband
>connections. Monopoly service is the rule there, not the exception.

Concur. One reason I started up my own sendmail for outgoing mail was that Pacific Bell Internet (under its various brand names) refused to close up open relays, even when their large clients ran spam relay servers. When ORBS caused my mail to the Linux Kernel list to be blocked because of this, I complained to "technical support". Their response was for me to sue ORBS for causing my mails to be blocked! When asked when PBI was going to close up the mail relay from their customers' open servers, they said "never." I had broadband access with them.

Fortunately PBI was clueless enough that I could run my own outgoing mail server and get connectivity back. It took nine months before I could move off of them. Other contributors may not be as fortunate -- I've heard about ISPs that block all SMTP traffic not involving their mail servers. When they are the only ISP in town, that makes for a bad situation.

I also expect nothing to come from this off-topic discussion, so this will most likely be the last you hear from me on this subject.

Satch
Re: bug database braindump from the kernel summit
At 10:54 AM 4/1/01 -0700, Larry McVoy wrote: > - one key observation: let bugs "expire" much like news expires. If > nobody has been whining enough that it gets into the high signal > bug db then it probably isn't real. We really want a way where no > activity means let it expire. I have a couple of suggestions that may improve the bug tracking process without needing a bug czar or driving everyone crazy. 1) The idea of letting a bug "expire" is a good one. One way to do this is to incorporate some form of user-base moderation into the bug database. Unlike some of the forum systems, there's no reason why we can't have tiers of moderators: "maintainers" are the clearinghouse people for certain portions of the Linux kernel source tree and should have a larger voice as to whether a bug is important, redundant, or completely off base. "contributors" are people recognized as having contributed in one way or another to the source tree (or, as the bug systems grows to encompass documentation, the documentation tree) and could serve as a "citizens advisory group" to speed the process of sorting the wheat from the chaff. Also, a "contributor" would be able to "take ownership" of the bug. 2) One of the big problems is recognizing that a particular bug has already been reported, and more importantly getting some sort of link between the new bug and the old bug. When I ran a DVT operation, the developers found this linkage to be extremely useful in order to trace the source of bugs, especially really obscure ones that cut across a number of modules. 3) In the commercial software world, there is a requirement that a bug be verified by someone "in house" -- in other words, a bug isn't a bug until someone can reproduce it. This is a key item in separating the noise from the signal. Again, the group-moderation system would permit quick identification of repeatable bugs. 4) Using an NNTP interface would be interesting. 
"Follow-ups" could consist of observations, commands, and requests for additional information from the bug report that isn't visible from the basic NNTP tree. If you want to see more about a bug, the tree representation could let you pick and choose what you want to look at. For someone who prefers to have everything to hand, a command would say "email sections a, b, ... to me (with "me" defined in the NNTP headers) and those sections would be mailed to the individual. 5) Most important, the person originally submitting the bug should have an easy way of saying "never mind." Existing search commands in the NNTP interface could make this a very easy chore for the infrequent contributor. EXPIRING: It's one thing to do an expire a la standard NNTP conventions, but it's quite another to do something "smarter". I see a couple of things that would have to enter into a decision whether to expire a bug from the pending-status list: a) The bug needs to be present for more than a set amount of time without overt activity. b) A person trying to replicate the bug should be able to extend the time-out -- some bugs take longer to replicate than others. If you don't allow for this, the bug could be expired before it can be verified, and the verifier has to work harder (assuming they even bother) to extract the bug from the data mine and get it to where a code guy can get to it. c) A maintainer should be able to sink a bogus bug early, especially if no one has owned up to trying to replicate it or fix it. Contributors can heartily declare a bug "bogus", and if enough do so the bug could be sunk early. Also, if enough people say "I can't replicate this bug" that's a good sign you have a piece of noise. From my own experience in commercial shops, I'd say that we could start with an expire time of two weeks, and if necessary adjust it. Weighting for each of the metrics for expiring bugs could be set experimentally. 
The goal is that a maintainer can squash bugs NOW, and the community could actively squash bugs in 24 hours. IS THE BUG FIXED: When a bug is declared "fixed" the bug tracking system needs to alert everyone who has submitted the bug and replicated it. This notification would then let those people (those who are still interested) see if the patch really fixes the bug. If it does, confirmation of a bug fix would be included, and that would help Alan & Co. to determine what patches should go in. Just a few random thoughts on the whole process -- but I suspect others have already thought of these things. I'd be interested in working on this, day job willing. Stephen Satchell [EMAIL PROTECTED] - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
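The expiry rules sketched above (base timeout, verifier extensions, early sinking by a maintainer or a quorum of "bogus" votes) can be captured in a few lines of C. This is only an illustrative sketch: the struct fields, the two-week constant, and the quorum of five are made-up names and thresholds, not part of any real bug tracker.

```c
#include <assert.h>
#include <time.h>

#define BASE_TIMEOUT (14 * 24 * 3600)   /* two weeks, in seconds */
#define BOGUS_QUORUM 5                  /* contributor votes needed to sink early */

struct bug {
    time_t last_activity;   /* last comment, patch, or replication attempt */
    time_t extensions;      /* extra seconds granted by people trying to replicate */
    int    bogus_votes;     /* contributors who declared the bug bogus */
    int    maintainer_sunk; /* maintainer killed the bug outright */
};

/* Returns non-zero if the bug should drop off the pending-status list. */
static int should_expire(const struct bug *b, time_t now)
{
    if (b->maintainer_sunk || b->bogus_votes >= BOGUS_QUORUM)
        return 1;   /* sunk early, per rule (c) */
    /* rules (a) and (b): quiet past the timeout, allowing extensions */
    return now - b->last_activity > BASE_TIMEOUT + b->extensions;
}
```

A verifier extending the time-out just adds to `extensions`; everything else is a pure function of the bug's record, so the weighting can be tuned experimentally as suggested.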
Re: OOM killer???
In our resource monitors, we were expected to keep track of allocations, deallocations, and actual usage such that we could CONTROL the overcommitment of resources, and therefore avoid deadlock. From my reading of the threads and a glance at the source, the problem is that processes can ask for and receive resources virtually without limit...as long as nobody actually uses those resources. It's only when the processes try to use the resources the system has promised them that the problem rears its ugly head. Not only should there be beancounting, but there needs to be policy input to the kernel about when to fail malloc() et al. If I want to avoid all overcommitment, I should be able to set a value in a file in the /proc filesystem to zero to say "0 percent overcommitment" -- which means fail malloc() calls when you reach a calculated high-water mark. Higher values stored in this file mean higher levels of overcommitment are allowed in memory allocation calls. The default at boot would be zero; the distributions could then decide how to set the overcommitment value in the start-up scripts. The userland policy process could even tweak this overcommitment value on the fly if so desired, to tune the system to current demand and to the admin's inputs. This helps separate the prevention measure (failing malloc()s) from the recovery measure (killing processes). I see no way that the beancounting can be relegated to a userland process -- it needs to be in the kernel. To avoid excess bloat in the kernel, the kernel should only count the beans and trigger the userland process when thresholds are exceeded by the system. In this manner, no OOM killer code need be in the kernel at all. No OOM killer registered? We then revert to Version 7 action: panic. Which leads to my final point: I believe that the SIGDANGER signal should be defined in Linux. The signal would not be raised by the kernel in any way -- that's left to the userland OOM daemon.
The response to SIGDANGER would need to be defined. The default action would be to ignore SIGDANGER. One comment is that a denial of service could be launched by a process defining a SIGDANGER handler that would call malloc() -- I've already mentioned the requirement that the userland daemon have a way of causing all calls to malloc() to fail. The Linux definition would differ from the AIX definition, but the net result would be the same, and I believe that the Linux definition can be written such that existing AIX-based handlers will work with minimum modification. I submit this as a strawman suggestion -- I'm not married to any of the ideas. Feel free to suggest alternatives that solve the problem. Stephen Satchell - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: Linux Worm (fwd)
At 10:24 AM 3/26/01 -0500, you wrote: >It's sad that people like the one who sent out messages like that can stay >employed. In the last year there have been several Windows love-bug type >worms each causing damage estimated in the billions. One or two Linux worms >that go after a long-fixed problem with no published accounts of significant >damage and you get that sort of email.. What is even sadder is that, for loser companies like the one cited, there is a series of Linux certification programs (not distribution-dependent) under development at CompTIA (the Computing Technology Industry Association). Satch - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: mouse problems in 2.4.2
I'm experiencing the same thing with an ASUS P5A-B running a K6-2/266 with Linux 2.2.17, and Windows 98. Central problem appears to be the KVM switch I'm using. Save yourself the problem. I had to reboot all the systems to get regular mouse operation back with the Logitech. Satch At 04:33 PM 3/25/01 -0600, you wrote: >I am experiencing debilitating intermittent mouse problems & was about >to dive into the kernel to see if I could debug it. But first, I thought >a quick note to the mailing list may help. > >Symptoms: >After a long time of flawless operation (ranging from nearly a week to >as little as five minutes), the X11 pointer flies up to the top-right corner, >and mostly wants to stay there. Moving the mouse causes a cascade of >spurious button-press events to be generated. > >This did not occur with 2.4.0test2 or 2.2.16 (to the best of my >recollection) and first showed up in 2.4.0test7 or 2.4.1 (not sure). >With 2.4.2, the symptoms seem slightly different (almost all pointer >movement events seem to be lost; although spurious button-press events >still happen). > >Mouse is a logitech trackman marble, with USB connector to >logitech-supplied USB to ps/2 DIN plug. Configured as a PS/2 mouse. >Motherboard is an Athlon/VIA Apollo KA7. - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH] Prevent OOM from killing init
At 05:30 PM 3/25/01 +0200, you wrote: > > Ultra reliable systems dont contain memory allocators. There are good > reasons > > for this but the design trade offs are rather hard to make in a real world > > environment > >I esp. they run on CPU's without a stack or what? No dynamic memory allocation AT ALL. That includes the prohibition of a stack. I've seen avionics-loop systems that abstract a stack but the "allocators" are part of the application and are designed to fall over gracefully when they become full -- but getting this past a project manager is hard, as it should be. Then there are those systems with rather interesting watchdog timers. If you don't tickle them just right, they fire and force a restart. The nastiest of these required that you send four specific values to a specific I/O port, and the hardware looked to see if the values violated certain timing guidelines. If you sent the code too early or too late, or if the value in the sequence was incorrect, BAM. The hardware was designed by a guy with some rather interesting experiences with software "engineers" dealing with watchdog timers... Satch - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
Re: [PATCH] Prevent OOM from killing init
At 12:41 AM 3/25/01 +0100, you wrote: >If your box is running for example a mail server, and it appears that >another process is just eating the free memory, do you really want to kill >the mail server, just because it's the main process and consuming more >memory and CPU than others? > >Well, fine, your OS is up, but your application is not here anymore. If you have a mission-critical application running on your box, add it to the inittab file with the RESPAWN attribute. That way, OOM killer kills it, init notices it, and init restarts your server. By the way, are the people working on the OOM-killer also working to avoid killing task 1? Satch - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
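For the record, a sysvinit inittab entry using the respawn action looks like this (the id field, runlevels, and daemon path below are made up for illustration -- substitute your own):

```
# /etc/inittab -- hypothetical entry: init restarts the daemon
# whenever it dies, including after an OOM kill
ms:2345:respawn:/usr/local/sbin/mail-server
```

The format is id:runlevels:action:process; with the respawn action, init notices the process exit and starts it again, exactly as described above.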
Re: [PATCH] gcc-3.0 warnings
At 04:31 PM 3/23/01 -0800, you wrote: >This has nothing to do with fastpathing and object code optimization. C >doesn't have exception handling, so you either have to remember to undo >allocations etc. in failure cases all through the code, or you stick your >undo code at the end of the function and have all failure cases jump to the >relevant label. It's not pretty, but it's much less error-prone e.g. Really? I have a "cleanup" function that can be called during failure cases (and success cases -- but you didn't mention that) so that the cost is very low and I don't have to code ANY labels. But then again, I'm a double-pipe abuser, in that I tend to code "atomic" sequences as if ((a) || (b) || (c) || (d) || (e) || (f) || (g) || ... ) { something failed} else {it all worked!} and make sure that the failure value is non-zero for each a, b, c, d, and so forth. I remember looking at the generated code from one compiler for x86 and seeing a series of short jumps to short jumps to short jumps... to the failure case, which in that particular sequence saved about 100 bytes. I haven't looked at GCC output yet to see what it does, but working in a 32-bit system instead of a 16-bit system I tend to care a little less about "efficiency". Does that mean that I avoid "goto"? No. Like every other construct in the C language, there is a valid and appropriate use for every single thing. The key is recognizing when the goto is appropriate. Another thing you will see in my code is resource pointers being initialized to zero on entry, set to non-zero values as resources are allocated, and then conditionally released based on whether the value is zero or non-zero. It makes recovery from malloc failures easier, for one thing. Satch. the || Abuser. - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
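The pattern described above can be spelled out in a few lines. This is only a sketch with invented names (`struct ctx`, `step_a`, `step_b` are illustrative): resource pointers start at zero, each step in the || chain returns non-zero on failure, and one cleanup function conditionally releases whatever was allocated -- on failure AND success -- with no goto labels.

```c
#include <stdlib.h>

struct ctx {
    char *bufa;     /* resource pointers initialized to zero on entry */
    char *bufb;
};

/* Each step returns non-zero on failure, as the || chain requires. */
static int step_a(struct ctx *c) { c->bufa = malloc(64); return c->bufa == NULL; }
static int step_b(struct ctx *c) { c->bufb = malloc(64); return c->bufb == NULL; }

/* Conditionally release based on whether each pointer is non-zero. */
static void cleanup(struct ctx *c)
{
    if (c->bufa) { free(c->bufa); c->bufa = NULL; }
    if (c->bufb) { free(c->bufb); c->bufb = NULL; }
}

/* Returns 0 on success, non-zero if any step failed. */
static int do_work(struct ctx *c)
{
    int failed;

    c->bufa = NULL;
    c->bufb = NULL;

    failed = (step_a(c) || step_b(c));   /* stops at the first failure */

    cleanup(c);   /* called in failure cases and success cases alike */
    return failed;
}
```

Because || short-circuits, a failure in step_a means step_b never runs, and cleanup frees only what was actually allocated.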
RE: [PATCH] Prevent OOM from killing init
At 10:28 AM 3/23/01 +0100, you wrote: >Ehrm, I would like to re-state that it still would be nice if >some mechanism got introduced which enables one to set certain >processes to "cannot be killed". >For example: I would hate it if the UPS monitoring daemon got >killed for obvious reasons :o) Hey, my new flame-proof suit arrived today, so let me give it a try-out... 1) If you have a daemon that absolutely positively has to be there, why not put the damn thing in "inittab" with the RESPAWN attribute? OOM kills it, init notices it, init respawns it, you have your UPS monitoring daemon back. 2) Why is task #1 (init) considered at all by the OOM task-killer code? Sounds like a possible off-by-one bug to me. 3) If random task-killing is such a problem, one solution is to add yet another word to the process table entry, something on the order of "oom_importance". Off the top of my head, this 16-bit value would be 0x4000 for "normal" processes, and would be the value at start-up. A value of 0x would be the "never-kill" value, while the value of 0x would be the equivalent of the guy who ALWAYS gives up his airplane seat. The process could set this value between 0x and 0xBFFF for processes running without root privs, the full range for root processes. The big advantage here is that a daemon or major system can set the value to zero during start-up (to ensure being killed if there aren't enough system resources) and then boost the immunity once it is going strong. I can see this being of particular value in windows desktops where an attempt to start a widget causes an out-of-memory condition and THAT WIDGET is the one that then dies. That would be the expected behavior. From a debug perspective, it means that the programmer can avoid killing something on his development system "by accident" by attracting all the task-killing lightning during initial debug. This would be a sure-fire improvement over accidentally killing your debugger, for example. I call it "nice for memory".
Satch - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
RE: Linux should better cope with power failure
At 01:16 PM 3/19/01 -0800, Torrey Hoffman wrote: >Yes. Some of this is your responsibility. You have several options: >1. Get a UPS. That would not have helped your particular problem, >but it's a good idea if you care about data integrity. >2. Use a journaling file system. These are much more tolerant of >abuse. Reiserfs seems to work for me on embedded systems I am >building where the user can (and does) remove the power any time. >3. Use RAID. Hard drives are very cheap and software raid is very >easy to set up. Sorry, but you really should have read the ENTIRE thread before commenting. This guy's original complaint was that his USB keyboard locks up, and the only way to get it back is to do a very rude restart. In combating this problem, the guy was observing the "shortcomings" of the file system. To be more to the point of the guy's problem, he should consider using software specifically intended for UPS hardware to notify a system when the power is going to go, and wire up an appropriate switch to signal his system to enter shutdown when his keyboard goes south. By forcing an orderly shutdown, he doesn't see the fsck-ing messages, he gets his USB keyboard back, and all is well with the world. Of course, the other option is to use a regular keyboard instead of a USB keyboard, but why point out the really easy solution? "Hey Doc, it hurts when I do this." "Then don't do it." Satch - To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] More majordomo info at http://vger.kernel.org/majordomo-info.html Please read the FAQ at http://www.tux.org/lkml/
RE: [OT?] Coding Style
At 10:47 AM 1/23/01 -0600, Jesse Pollard wrote:
>Code is written by the few. Code is read by the many, and having _ in
>there makes it MUCH easier to read. Visual comparison of
>"SomeFunctionName" and "some_function_name" is faster even for a coder
>where there may be a typo (try dropping a character) or misidentifying
>two different symbols with similar names:
>
>   d_hash_mask
>   d_hash_shift
>
>This is relatively easy to read. Conversely:
>
>   DHashMask
>   DHashShift
>
>are more difficult to spot.

Depends on what you are used to. I'm used to both, being both an old-world C programmer from the very beginning (where underscore was the preferred way) and also a Pascal programmer (where the mixed-case form was the preferred way). Remember a language where dollar signs broke up words?

But then again, one reason I'm so fond of structures is that you can get away from the whole thing by being able to read

   d.hash.mask
   d.hash.shift

(It's really too bad that you can't have structured enum constants, isn't it?)

By the way, just so everyone hates me, I would tend to key the above two names as

   DHash_mask
   DHash_shift

so that, as another person has commented, you identify the class of a variable and the specifics as easily identifiable entities. That assumes that your "class" names are sufficiently different that a mis-key will be caught by that master of bookkeeping, the compiler.
Re: [OT?] Coding Style
At 08:28 PM 1/23/01 +0800, Steve Underwood wrote:
>During a period of making a living out of sorting out severely screwed
>up projects I made a little comment stripper. I found comments so
>unreliable, and so seldom useful, I was better off reading the code
>without the confusion they might cause. I do, however, try to document
>the non-obvious through comments in what I write.

Ditto. Mine had the option of leaving the block comments in place (line count was a parameter) because the block comments proved to be more useful than the in-line comments.

>Some people still seem to be living in the age of K&R C, with 6 or 7
>character variable names that demand some explanation. Maybe some day
>they will awake to the expressive power of long (and well chosen) names.

Actually, they are still living as though the KSR-33 and ASR-33 teletypes were the only input device. :)

True story: I was retained to solve a particular problem for a company over a one-year time period. I wrote some rather nifty code to solve the problem, and was happily doing data extraction and conversion for that time period. Then there was a management turnover at the client, and the new guy decided to implement a cost-cutting measure: cut out as many outsiders as possible. He decided that I should give him the code I had developed over the year (that wasn't part of the contract, of course) so that he could have in-house people do it. Not just executable programs, of course: "We bought the development of that code, so we deserve the source." The bastard backed up the demand with his lawyer.

Not wanting to spend the money on the threatened lawsuit, I gave him exactly what he asked for: the source to the latest working version of the programs I wrote to do the job. It took a while to prepare the source for this jerk. Here is what I did to the source I gave the guy:

1) Used the output of CPP, which stripped out all include files and all comments. This had the interesting side effect of making the source compiler-dependent.

2) Stripped all newlines, and converted strings of spaces and tabs not in quotes to a single space. This made the source one line long... a VERY LONG line.

3) Converted each reasonable variable name to a string of seven random characters from the set [A-Za-z0-9_], with the first character restricted to [A-Za-z]. A list of #define statements equated the random name to the proper library name or symbol. (Because the names included a number of internal variables in the compiler library, this was a HUGE list.) The resulting symbol table was so large that I had to use disk to keep all the names. Inadvertently I had also randomized lexical elements such as "for", "while" and so forth, but #define statements took care of that problem.

4) Determined that the output of the compiler with the mangled source was exactly the same as the output of the compiler with the original source.

As you can guess, I discovered a few bugs with the compiler I was using. The compiler writer (who was just down the road) was highly amused with this and asked if they could "borrow" my mangler "for test purposes." (Just who did they think they were kiddin'?)

Satch
Re: [OT?] Coding Style
At 11:56 PM 1/22/01 +0000, Anton Altaparmakov wrote:
>At 16:42 22/01/2001, Mark I Manning IV wrote:
>>Stephen Satchell wrote:
>>> I got in the habit of using structures to minimize the number of
>>> symbols I exposed. It also disambiguates local variables and
>>> parameters from file- and program-global variables.
>>
>>explain this one to me, i think it might be useful...
>
>What might be meant is that instead of declaring variables
>my_module_var1, my_module_var2, my_module_var3, etc. you declare a
>struct my_module { var1; var2; var3; etc. }. Obviously in glorious
>technicolour formatting... (-;
>
>That's my interpretation anyway...

The first sentence is right on the money. In addition to module variables, I define a global structure as:

   extern struct G {
        /* the real globals */
   } g;

and then in the main program I define the instance as "struct G g;" This is more for apps than operating systems.

Further to the avoidance of pollution of the external global namespace, I define local functions as static. Indeed, in one parser I had over 1400 very small functions, none of them with external scope. Instead, I defined a structure of function pointers and exposed one name to the rest of the world. Sound stupid? Well, that stupidity had its place: the "opcode" in the pseudo-instruction stream was the offset into this structure of pointers to the pointer of interest, which made the main loop for the parser about five lines long, and not a switch statement to be seen. Three of those lines were to handle unknown opcodes...

I also am partial to arrays of function pointers when appropriate. Ever think how easy it would be to implement a TCP stack that would handle the "lamp-test packet" as a single special case? Granted, it results in a small amount of code bloat over the traditional in-line test method, but it does make you think about EVERY SINGLE ONE OF THE 64 COMBINATIONS of Urg/Ack/Psh/Rst/Syn/Fin (to use the labels from the 1981 version of RFC 793) and what they really mean. Especially the combination with all bits set.

Satch
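The opcode-as-table-offset dispatch described above can be sketched in a few lines. This is a hypothetical miniature, not the original parser: op_nop, op_inc, op_end, and op_bad are invented handler names, and the table is only four entries, but the shape is the same -- the main loop is about five lines, with no switch statement in sight, and the unknown-opcode case is just another table entry.

```c
/* Minimal sketch of dispatch through an array of function pointers. */
struct vm {
    int acc;                    /* a single accumulator for the demo */
    const unsigned char *pc;    /* pseudo-instruction stream */
};

typedef int (*op_fn)(struct vm *);

/* Handlers return 0 to continue, non-zero to stop the loop. */
static int op_nop(struct vm *v) { (void)v; return 0; }
static int op_inc(struct vm *v) { v->acc++; return 0; }
static int op_end(struct vm *v) { (void)v; return 1; }
static int op_bad(struct vm *v) { (void)v; return -1; }  /* unknown opcode */

/* The opcode is simply an index into this table. */
static const op_fn ops[4] = { op_nop, op_inc, op_end, op_bad };

/* The whole main loop: fetch opcode, index, call, repeat. */
static int run(struct vm *v)
{
    int r;
    while ((r = ops[*v->pc++ & 3](v)) == 0)
        ;
    return r;
}
```

The `& 3` mask plays the role of bounds-checking here; a real parser would validate the opcode and route out-of-range values to the bad-opcode handler explicitly.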
RE: [OT?] Coding Style
At 11:04 AM 1/22/01 -0500, you wrote:
>WRONG!!!
>
>Not documenting your code is not a sign of good coding, but rather shows
>arrogance, laziness and contempt for "those who would dare tamper with
>your code after you've written it". Document and comment your code
>thoroughly. Do it as you go along. I was also taught to comment nearly
>every line -- as part of the coding style used by a large, international
>company I worked for several years ago. It brings the logic of the
>programmer into focus and makes code maintenance a whole lot easier. It
>also helps one to remember the logic of your own code when you revisit
>it a year or more hence.

Oh, those who refuse to study history are doomed to repeat it.

The COBOL language was created specifically to reduce the amount of commenting necessary in a program, because the English-like sentence structure could be read and understood by humans. The FORTRAN language was created so that math types could "talk" in a language more familiar to them, letting the computer take care of the details about how to perform the specified task step by step. One goal of language designers is to REMOVE the need for comments. With a good fourth-generation or fifth-generation language, the need for comments diminishes to a detailed description of the data sets and any highly unusual operations or transforms on the data. I've even gone so far as to "invent" my own languages, and the parsers to go with them, to reduce the need to comment by making the code easy for humans to read. Not only are such systems easier to debug (with good language design) but they are highly maintainable and usually not all that difficult to extend when necessary.

Remember, the line-by-line commenting requirement was mandatory in assembler programming, because the nature of assembler made you outline each step by tiring step. When I worked for Rockwell, I was granted a partial waiver when I showed them my assembler-language commenting style: pseudo-code at the top of each block of assembler code. Blindly applying the rule to second-generation and later languages is just sloppy management, usually by people who don't understand coding. (And yes, that includes some professors of computer science I have known.)

Comments do NOT make code maintenance easier. Too many comments obscure what is really going on. Linus' style actually increases the maintainability of the code, because if the code doesn't accurately show how it implements the goal specified in the block comment, the coder hasn't done his/her job.

Want to improve the maintainability of C code? Consider the following:

1) Keep functional parts small. If the code won't fit in a hundred lines or so of code, then you haven't factored the problem well enough. Functional parts != functions. A program with thousands of well-encapsulated functional parts strung together into a single function is easier to maintain than a "well-factored" program with its parts spread all over hell. Diagnostic programmers have learned the hard way that factoring a program can make it difficult to ensure test coverage, and even more difficult to determine whether a part of the code is buggy or whether it found a hardware error it was looking for. This is why diagnostics tend to be rather long affairs with very few functions. In my ANSI C code, you will see the following a lot:

   #define DO   /* syntactic sugar */

   DO {
RE: [OT?] Coding Style
out a performance hit. (For understandability, I recommend strongly a considered use of symbolic bitmasks, however.)

3) Make creative use of a run-on if statement to improve error detection and recording. One of my tricks is to code the following statement in application programs:

   if( (err = "input file can't be opened",
        in = fopen(filename, "rb"),
        in == NULL)
    || (err = "output file can't be opened",
        out = fopen(oname, "wb"),
        out == NULL)
       ... ) {
        /* report the error that occurred, using the char * variable
           "err" to indicate the exact error. */
   }

This means that I don't have to explicitly remember to code error routines for each and every function call, but the error detection coverage is much, much improved. I used this technique in an IEEE-488 driver to detect errors in every step of the way, and it took LESS time to do it this way than to unroll the if statement because of the inherent tracing that the technique implements.

4) The functional part should be contained in a reasonable number of lines. Large while and for loops should call functions instead of having bloated bodies. Large case statements should call functions instead of running on and on and on.

5) For those statements that take compound statements (if, else, while, for, do while) the statement should ALWAYS be a compound statement. Nothing introduces bugs faster than a tired programmer not realizing that he/she is inserting a statement into the target of one of these statements and thereby replacing the target with a new one. This one issue has broken more patches in my experience than any other single item. The argument that "this introduces a blizzard of unnecessary braces" is outweighed by the guarantee that the programmer coming down the pike later won't accidentally remove a target line because s/he is too tired or rushed to recognize that s/he has to ADD BRACES (and, in the case of a severely nested statement, where to add braces) in order to turn a single-line target into a two-line target. (Of course, some of you never make mistakes like that. Fine.)

6) When you have an "empty" statement as the compound statement, indicate it unambiguously. I have yet to see a compiler that doesn't handle the following construct correctly:

   while (wait_for_condition()) {
   }

(or, more in keeping with Linus' style without adding an extra line, "while (wait_for_condition()) {}") The "{}" signal (familiar to those of you who are adept with xargs) indicates that nothing is being done, and does it far more readily than a single semicolon can ever signal. If I could make just one change to the C language, I would REQUIRE that a non-empty statement or possibly empty compound statement be the only valid targets for if, else, do while, for, and while statements.

7) Name-space pollution is always a problem, although in these days of computers with gigabytes of RAM it's less of a problem than it used to be. I started programming C when my main computer had 256K of RAM and the symbol table space for linking was limited. I got in the habit of using structures to minimize the number of symbols I exposed. It also disambiguates local variables and parameters from file- and program-global variables. This name-space management ensures that you don't accidentally redefine in an inner block a variable that you think is a global. In very large programs, it also avoids name collisions between portions of the project.

Style has little to do with art. Style has to do with minimizing mistakes, both now and down the road. If you don't like what I do, then don't do what I do. Do what minimizes mistakes for you. And, Linus, I'm not recommending you adopt any of these suggestions -- you have your way and I have mine. If you like any of these, though, feel free to take them for your own. File off the serial numbers, and enjoy.

Stephen Satchell
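The run-on if from point 3 can be made concrete. This is a hedged, self-contained variant, not the original IEEE-488 code: copy_open and its parameter names are invented for the example. Note that the comma operator evaluates left to right, so by the time a group's final `== NULL` test fires, err already names the step that failed. (The original fragment had a typo, `out = NULL` for `out == NULL` -- exactly the kind of slip this style makes easy, so spell the comparison out.)

```c
#include <stdio.h>

/* Open an input and an output file; on failure, return 1 and point
 * *errp at a string naming the step that broke. */
static int copy_open(const char *filename, const char *oname,
                     FILE **inp, FILE **outp, const char **errp)
{
    const char *err = NULL;
    FILE *in = NULL, *out = NULL;   /* resource pointers start at zero */

    if ((err = "input file can't be opened",
         in = fopen(filename, "rb"),
         in == NULL)
     || (err = "output file can't be opened",
         out = fopen(oname, "wb"),
         out == NULL)) {
        /* err already indicates the exact error; release whatever
         * was opened before the failing step. */
        if (in)
            fclose(in);
        *errp = err;
        return 1;
    }
    *inp = in;
    *outp = out;
    *errp = NULL;
    return 0;
}
```

Because || short-circuits, later groups never run once one fails, so err always describes the first error, and only already-opened resources need releasing.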
Re: [OT] Re: Possible C++ safe header project - Re: [Criticism] On the discussion about C++ modules
At 04:37 PM 10/23/00 +0200, Marko Kreen wrote:
>* This will _not_ be accepted into standard codebase. Don't you
>understand? Making headers C++ compatible is the first tiny step for
>doing modules in C++. Yes, from driver/module programmers perspective
>"they almost look same, and I think C++ is cooler" they (C/C++) should
>be compatible, but from kernel core's perspective they are whole
>different languages.

My pair-o-pennies(tm):

First, I agree that C++ has no place in an operating system kernel that has not been designed from the ground up with C++ in mind. I worked on kernel-level operating systems written in Pascal and in a bastard version of Fortran, and I still carry the scars from those experiences. (I do not have the same reservations about using other languages to write higher-level OS functions, that stuff that was called "middleware" in the Microsoft anti-trust case.)

I am also against wrapping all the headers in C/C++ compatibility wrappers unless there is a damn good reason to. One valid reason, in my view, is to reduce the number of headers by reusing userland headers for kernel code where appropriate and useful -- observing the rule that you should never define anything more than once unless there is a damn good reason not to.

Where Marko and I part company is in the thought that removing C++ keywords from kernel headers is something to avoid. I'm not talking about enabling C++ programming; I'm thinking that this may be forced by our tools on down the road. More and more C compilers are interpreting C++ language syntax, and it won't be long before you won't be able to switch off the recognition of C++ stuff when compiling C. After all, the compilers have to deal with the same bloat problem that we face in the kernel.

Linus has the final say, of course, but to suggest that any changes that remove name collisions between C and C++ be rejected out of hand has the potential for shooting ourselves in the foot. I'd rather do it slowly *now* instead of having to do it wholesale when the tools force us. Not to satisfy the C++ people. To insure ourselves against obvious tool changes.

Satch
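The kind of C/C++ name collision the thread is arguing about is easy to illustrate. The struct names and fields below are invented for the example, not taken from any real kernel header: a C struct is free to use "new" as an identifier, but the same header then cannot even be parsed by a C++ compiler, independent of whether anyone ever writes C++ modules.

```c
/* Legal C, but a syntax error the moment a C++ compiler sees it,
 * because "new" is a C++ keyword. */
struct request_old {
    int new;            /* fine in C, unparseable as C++ */
};

/* Renaming the colliding member removes the collision; the
 * conventional linkage guard then lets one header serve both
 * languages (it expands to nothing when compiled as C). */
#ifdef __cplusplus
extern "C" {
#endif

struct request_new {
    int is_new;         /* no keyword collision */
    void *priv;         /* "private" would be another collision */
};

#ifdef __cplusplus
}
#endif
```

The rename costs nothing in C -- the layout and behavior are identical -- which is the post's point: removing collisions now is cheap insurance against tools that insist on C++ rules later.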
Re: [Criticism] On the discussion about C++ modules
At 01:30 PM 10/22/00 +0200, you wrote:
>Yup. And I want to try out my modules coded in Visual Cobol, APL, and
>PL/I. Oh, and I want to rewrite ext2fs to use Befunge.

Would that be PL/I (F) or PL/I (H)? You have different footprint problems with each of these levels. You will also need to write some glue code to translate PL/I data structures to kernel structures. Don't forget the 2000-line SED script to deal with namespace conversion in the PL/I object output.

(I still have my 30-year-old PL/I documentation from my mainframe days, including the compiler program logic manual.)
Cuecat-ers: update on base-64 conversion table.
After some work, I've discovered that the base-64 decode is based on the following decode table:

   char map[66] = "abcdefghijklmnopqrstuvwxyz"
                  "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                  "0123456789"
                  ",-\0";

The only change from the previously-published information is the graphic in the next-to-last character position in the table, position 62. The earlier documents stated this should be a plus sign; in fact it should be a comma.

It took reading 400 barcode symbols before I discovered the error and determined the correction, so don't anyone feel they "blew it" -- it doesn't happen often at all, and only when you are working with Code 128 or the CueCat equivalent. When reading UPC, ISBN, and other numeric-only barcodes this particular position isn't used (not enough one-bits density).

More information on cues can be found at http://www.fluent-access.com/wtpapers/cuecat/index.html, and some interesting tools for creating traditional-looking barcode presentations of cues (as opposed to the stylized slanted-bar version that you find in the Radio Shack catalog, Forbes magazine, and other places) can be found at http://www.azalea.com/QTools/

Satch
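Given the corrected table, inverting it is a one-liner. This is a sketch of the symbol-to-value half of a decoder (the unmap name is invented, not from the post): each encoded character maps to its index in the table, i.e. its 6-bit value, with the comma at 62 and the hyphen at 63.

```c
#include <string.h>

/* The corrected CueCat decode table from the post: position 62 is a
 * comma, not the plus sign of ordinary base-64. */
static const char map[66] = "abcdefghijklmnopqrstuvwxyz"
                            "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
                            "0123456789"
                            ",-\0";

/* Return the 6-bit value of symbol c, or -1 if c is not in the table.
 * The c == 0 guard matters: strchr would otherwise "find" the
 * terminating NUL and report it as a valid position. */
static int unmap(int c)
{
    const char *p = c ? strchr(map, c) : NULL;
    return p ? (int)(p - map) : -1;
}
```

Note that a standard-base-64 decoder would accept '+' at 62; with this table '+' correctly comes back as -1, which is exactly the discrepancy the 400-barcode experiment uncovered.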
We are as good as our tools
At 06:40 PM 9/6/00 -0700, J. Dow wrote: >30 years of experience have proven this to me over and over again from >watching auto mechanics and ditch diggers through every engineering >discipline I have ever paused to observe. Only a damnfool eschews good >tools because of some sense of "pride" that doing it the caveman way >"forces me to think more." Let me add my 30-year's worth of experience in a number of nasty computer environments to Ms. Dow's comments. There are seven people in the world who I drove crazy by paying more attention to tools than to the "task at hand"... yet when it came to counting coup on projects that WORKED, my bosses quickly shut up about my building jigs and scaffolding and debug aids into code. In other parts of the Open Source community, people point with pride to the tools they use to do their work. Let's not forget that Ritchie decided that he needed a language to speed his development of a quick-n-dirty operating system using an old PDP-7 that had been discarded -- when the "logical" way would have been to dig right in -- using assembler, the language of the day, to try to get the job done. Interesting that the tool came first, in the system that gave birth to the effort which is the subject of this mailing list... On the subject of debuggers: All too often I have run into the situation with real-time code where the Heisenberg principle causes the system to work with the debugger in, and fail with the debugger out. Ditto with "test code" that is conditionally compiled as an aid to debugging. It's akin to a hardware engineer using 50pF capacitors to "make the prototype work" and never taking the time to understand just why adding a touch of slowdown made the circuit work. This is especially true when that "50pF capacitor" is a scope probe. Is that a good reason to "just say no" to debuggers? I don't think so. Too little reliance on debuggers and defensive code is just as bad, if not worse, than too much reliance. 
Debuggers are great for collecting the symptoms of a problem; it still takes thinking and role-playing to get to the heart of it. That thinking operator also has to know the effect of the tool he is using, just as the hardware guy has to know the effect of placing a scope probe right THERE on the circuit. Indeed, that scope probe can be a handy way of tweaking the system in subtle ways, and analysis of the resulting change in symptoms can point to the problem. "Why does placing a breakpoint THERE cause such a drastic change? AHA!"

With the amount of state being thrown around in an operating system, only a good debugger in the hands of a thinking operator can isolate the fault to a particular block of code -- then the thinking operator gets off the machine and onto the source code to noodle out why the astonishing event is occurring.

I'll crawl back into my writing cave now, content to watch for the moment.

Satch

-
To unsubscribe from this list: send the line "unsubscribe linux-kernel" in the body of a message to [EMAIL PROTECTED] Please read the FAQ at http://www.tux.org/lkml/
Re: Press release - here we go again!
At 04:15 PM 9/1/00 -0700, you wrote:
>And if got lost. That should tell you something. Perhaps something like
>"*Advanced interface support for USB, FireWire, and AGP!"
>
>Then place any expository text indented under that as complete sentences.
>This treats the bulleted items as "titles". Your target audience despises
>incomplete sentences and clunky grammar. (And do rest assured that folks
>like Jerry Pournelle have posted some REAL clunkers, worse than anything
>you did, on BIX for our chuckles.)

Interesting juxtaposition, Dow. Your suggestion includes a bang (exclamation mark), and then you mention the most vicious bang-hater I've ever run into, Jerry Pournelle, in the next paragraph. Whatever will you do next, Oh Emoticon Person? :)

(This is a VERY serious thing -- one sign of an amateur press release is The! Excessive! Use! of! the! Ballbat! Character! I know columnists who stop reading and start scanning for exclamation marks when they encounter the first one. Because all too many puff-piece writers place the bang in the headline, that means the entire release is ignored.

I remember when I was including some C code in an article I was writing for a magazine; the editor said to take out all of those bangs! It took 10 minutes with a copy of K&R to show him that the exclamation marks were operators, not emphasis.)

Satch
[OT] Re: Press release - here we go again!
[Apologies in advance for polluting linux-kernel with this, but I felt that Mr. Stone's interesting contribution deserved a response from "the other side" -- and that my response may have an impact on the people providing the source information, namely the kernel developers.]

As a working journalist, founding member of the Internet Press Guild, and a kernel hacker, I thought that I should inject my pair-o-pennies(tm) into the discussion of any press release. First, look at http://www.netpress.org/careandfeeding.html, the IPG "Care and Feeding of the Press," for tips and guidelines on how to get a press release noticed by the Internet-beat journalist.

When writing the press release, you need to keep the following things in mind:

1) KEEP IT SHORT. Journalists will throw out press releases that, when printed double-spaced on standard 8.5x11 paper, exceed 1.5 pages of text. Not everything is of equal importance or "gee-whiz-ness". Pick two or three major features to punch, and then provide a URL for the rest. I strongly suggest that Alex and Linus caucus and determine what the "top features" should be in such an announcement.

2) Be careful about who you say is the source. While Linus is well known as the "father of Linux," we need to avoid overloading him by making him the focus of press questions. This is especially true when you consider the next point...

3) There MUST be a press contact for questions and follow-up. A press contact consists of a name and a telephone number -- a mail address just won't work. Further, that person had better be able to work both East and West Coast time. It's a big job. In your suggested release, there is no place for the hapless and clueless journalist to even begin to get information so that s/he can begin meaningful research. Please remember that story pitches (the proposal by a reporter that something be covered) get perhaps 10-15 minutes of time; this is because a journalist may pitch 20-30 stories and only write 3 of them.
5) Even better would be to obtain the services of a PR firm used to dealing with high-tech questions -- if you would like a list of potential sponsors, I can poll the IPG to see who might be likely candidates. Off the top of my head, I would recommend Pam Alexander, Hastings & Humbolt, Marty Winston, and S

6) Prepare for your press contact(s) a FAQ sheet -- this could be a compendium of existing FAQs, but it should also include the "stupid" questions that a newbie will think of and that no sane FAQ maintainer would include. It should be as inclusive as possible, and the FAQ should be made available online as well. I'll go out on a limb and say I'd be happy to review any FAQs that may exist to see what stupid questions are missing.

7) Organize background information. WHO is the kernel development team? HOW does the kernel get developed, tested, shaken down, bug-reported? WHAT are the new features, and WHY are they important? (I don't have a WHEN because things run a little loose around here.) I know that many, many feature articles have been written that try to answer these questions. MINE THOSE PAST ARTICLES, and improve the accuracy of that information.

It would help if there were one widely recognized place for journalists to go to find everything. My personal suggestion would be to open a section on http://www.linuxhq.com/, then publicize the hell out of it.

We now return you to the current kernel religious war, already in progress...

Stephen Satchell