Re: good DoS / DDoS detection tool
Try a Google search for mod_dosevasive if you're looking for something to protect Apache. - Original Message - From: Chad Adlawan [EMAIL PROTECTED] To: debian-isp@lists.debian.org Sent: Thursday, 20 January, 2005 11:45 PM Subject: good DoS / DDoS detection tool Good Day! Can anyone recommend a good DoS / DDoS tool? Preferably something packaged in Debian stable/frozen already - with maybe the capability to refuse traffic from suspected attackers. TIA, Chad -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
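For reference, mod_dosevasive (later renamed mod_evasive) is configured with a handful of request-rate thresholds. A minimal sketch for Apache 1.3, assuming the module is already loaded; the values are illustrative defaults, not tuned recommendations:

```apache
# Illustrative mod_dosevasive stanza; tune thresholds to your traffic.
<IfModule mod_dosevasive.c>
    DOSHashTableSize    3097
    DOSPageCount        5     # max requests for the same page per interval
    DOSSiteCount        50    # max requests for any object per interval
    DOSPageInterval     1     # page-count interval, in seconds
    DOSSiteInterval     1     # site-count interval, in seconds
    DOSBlockingPeriod   10    # seconds an offending client gets 403s
</IfModule>
```

Clients that exceed the counts within an interval receive 403 responses for the blocking period.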
Re: PHP 4.1.2
I was wondering... are you guys concerned about the latest PHP vulnerabilities, which affect the Debian stable 4.1.2? It seems that woody's php4 package isn't affected. http://lists.debian.org/debian-security/2004/12/msg00090.html Norbert This is excellent news! However, I wonder how it is possible, since the advisory specifically stated that our versions were vulnerable...
PHP 4.1.2
Hi all, I was wondering... are you guys concerned about the latest PHP vulnerabilities, which affect the Debian stable 4.1.2? How are you handling it? The Debian Security Team still hasn't released any patches, so I'm concerned and worried about this. Or perhaps you guys think there is no need to worry? Jas
Re: PHP 4.1.2
We're all worried. There are 2 threads going on in debian-security about this issue: http://lists.debian.org/debian-security/2004/12/msg00044.html http://lists.debian.org/debian-security/2004/12/msg00047.html ... http://lists.debian.org/debian-security/2004/12/msg00054.html Just read all that... not particularly encouraging, as it seems either no one is interested in backporting the security fixes or it is not possible to backport them. I heard there are some mod_rewrite rules, posted to BUGTRAQ or similar, to temporarily work around this in the meantime. Do you run any way of mitigating the security threat in the meantime?
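I don't have the exact BUGTRAQ rules at hand, but the general technique is to have mod_rewrite refuse requests matching the exploit pattern until a fixed PHP can be installed. The pattern below is purely a placeholder to show the shape of such a rule, not the real exploit signature:

```apache
# Illustrative only: the condition pattern is a placeholder, not the
# actual signature from the advisory. Refuses matching requests with 403.
RewriteEngine On
RewriteCond %{QUERY_STRING} (%00|\.\./) [NC]
RewriteRule .* - [F]
```

Such rules are a stopgap; they only catch attacks that are visible in the request line or headers.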
Re: PHP 4.1.2
I've been using backports.org's 4.3.10 packages in production without problems. I had problems with Invision Power Board, but that was fixed by upgrading to the latest version of Zend. I am a little disappointed with Debian on this update; I thought we would have had an update by now. Little bugfixes and even local exploits... okay... I can understand there is less urgency. But for REMOTELY exploitable vulnerabilities, I think there is much greater urgency and importance. I wish we could get an update on whether they are even _WORKING_ on a PHP update, or if they have just thrown in the towel and said we're not going to patch this. If that's the case, we'll upgrade, but not otherwise. Anyone have any updates on whether they are even trying to patch or not?
Re: PHP 4.1.2
--On Wednesday, December 22, 2004 23:42 +0100 Philipp Kern [EMAIL PROTECTED] wrote: In my opinion it is not worth backporting PHP 4.3 to stable as sarge *should* be released as soon as security team support is available. Sarge is taking an extremely long time to get out the door. It's been nearly a year of 'soon' being the answer. Leaving boxes vulnerable in the meantime is not really an option, especially for ISPs and hosting companies. I'd just like to have at least an indication of whether anyone even _wants_ to fix the existing PHP version, or... not. The existing version 4.1 from the stable branch has been going well for a long time now. Do you know of any problems or incompatibilities in moving to the 4.3 branch?
Re: initrd in Debian kernel-image
the basic rule of thumb is: if i'm likely to need it to boot or if it's essential for what the machine is supposed to do, then it gets compiled in to the kernel. otherwise as a module. craig Agree completely. In our case, we also compile in the 3ware RAID stuff and a few common NIC drivers like the cheapo NE2000 or similar, so we can drop in a rubbish card if the Intel or 3Com cards fail. In my experience, building essentials into the kernel is wise, as they tend to have much less chance of fscking up than modules. YMMV.
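As a sketch, the rule of thumb above translates into a 2.4-era .config along these lines (the symbol names are from 2.4 kernels; check your own tree before relying on them):

```
# Boot-essential and fallback hardware built in (=y), the rest as modules (=m)
CONFIG_BLK_DEV_3W_XXXX_RAID=y   # 3ware RAID: needed to boot, so built in
CONFIG_EEPRO100=y               # primary Intel NIC, built in
CONFIG_NE2K_PCI=y               # cheap fallback NIC, also built in
CONFIG_USB_STORAGE=m            # non-essential: leave as a module
```

Anything marked =y is available before the root filesystem is mounted, which is exactly what you want for disk controllers.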
Re: Networking Between eth0 eth1
From: Johnno [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Thursday, 14 October, 2004 4:34 PM Subject: Networking Between eth0 eth1 Hello, I am running Debian Woody and have two ethernet cards in the computer.. eth0 is connected to the internet, eth1 is connected to my local network.. how do I get my local network to access the internet on eth0?? The Debian box works fine and I can surf and download from the box, but I want to be able to do the same on the local network connected to eth1.. Many thanks, Johnno Look into network bridging, IP masquerading, and NAT, all of which you can find on Google and in the HOWTOs.
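On woody with a 2.4 kernel, the usual answer is NAT (IP masquerading) with iptables. A minimal sketch, assuming eth0 faces the Internet and eth1 the LAN (run as root):

```shell
# Enable packet forwarding, then masquerade LAN traffic out eth0.
echo 1 > /proc/sys/net/ipv4/ip_forward
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i eth1 -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o eth1 -m state --state ESTABLISHED,RELATED -j ACCEPT
```

The LAN machines then use the Debian box's eth1 address as their default gateway and a working DNS server, and they can reach the Internet through it.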
Re: RAID-1 to RAID-5 online migration?
On Tue, Sep 07, 2004 at 12:06:13AM -0400, Chris Wagner wrote: If you're looking for a fast RAID product that's reasonably priced, I'd take a look at NetCell's SyncRAID product (http://www.netcell.com/), which uses a 64 bit RAID-3 variant they call RAID XL. It got a good review from Tom's Hardware Guide and it looks like they've really solved the read-calc-write problem of RAID-5. Looks good, but is it supported in Linux? The web site says not: Currently only supports Windows XP, 2000, 2003 Retail Box includes PCI card, CD, Manual, IDE cables. I'm guessing that since it is completely OS-transparent it should work... not that I have used it. I have been wondering about the merits of using OS-transparent RAID solutions, as that would allow easy migration between systems. Any thoughts on this?
Re: backup DNS question
We have: ns1.lctc.org ns2.lctc.org ns2.lctc.org is (apparently) down. It is in a locked and alarmed building. How is this affecting users of our DNS? This shouldn't affect them... that is the idea of having a minimum of 2 DNS servers: in the event one fails, the other continues operating. Of course, there is broken software out there that will not query the 2nd DNS server, but that is pretty rare, as most people will be using their ISP's nameservers, and those are usually not broken. Hope that helps! Jas
Re: max requests a celeron web server can handle
I have another question - what is the optimal max. keep-alive time? Because as I can see from Apache's /server-status page on our server, there are usually about 10-15 processes in state S (Sending Reply) and another 40-50 in K (Keepalive). I have lowered this time from 15 seconds to 10. Is there any optimal setting? Alternatively, turn keep-alive off completely. Having keep-alive on means that if a person clicks on something on the webpage within 10 (or 15) seconds, it will load quickly, as the server doesn't need to spawn new children to handle it. However, each visitor can tie up 5 or 10 such processes, so you can imagine the problem when there are many concurrent visitors. We just turn it off, and it works well. What is the default browser behavior? I load www.somewebsite.com, with 20 little images. The browser makes a few connections to the server, fetches all the images, and the connections stay in keep-alive state. When I click some link on that web page, does the browser try to verify all the (same) images again? Or does it just fetch the new page and maybe some new images? It depends on their cache settings, whether there is a proxy in front of the person (many ISPs have transparent proxies), and whether you set Pragma: no-cache... but usually the web browser tries to fetch as little as possible.
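For concreteness, the two approaches discussed map onto a few Apache directives (the values shown are illustrative):

```apache
# Option 1: disable keep-alive entirely, freeing children immediately
KeepAlive Off

# Option 2: keep it on but short, so idle children are recycled quickly
KeepAlive On
KeepAliveTimeout 5          # seconds; lower than the 15-second default
MaxKeepAliveRequests 100    # requests served per persistent connection
```

Which option wins depends on whether your pages pull many small objects per view (keep-alive helps) or your bottleneck is the number of resident Apache children (turning it off helps).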
Re: What is GreyListing
If you have to go through one 4xx message to send a message then it takes twice the network bandwidth to send a spam and more than twice the effort (queues have to be maintained etc). If you were to require more than one 4xx message and a longer time-out then it makes even more work for the spamming machine and thus reduces the volume of spam that can be sent before the machine is put on black-lists and/or shut down. In my experience, you have 2 situations: 1) an open relay - in this case, the violated mail server will keep track of the spams it is sending, and will retry again later. However, most times the server being violated will slow down under the load of spams being sent, the admin will notice this and close the relay. So the second or third attempts never happen because the admin closes the relay before they are retried. 2) spammer server / open proxy - in this case, there is usually no queue, as the spammer uses custom software to belt out as many emails per minute/hour as possible. Usually it has a huge list of addresses, and a selling point is the throughput of emails per minute. These programs try to minimize bandwidth usage per email, so no retries are done, the connection timeout is pretty short, and they give up quickly. I notice this when spammer servers connect to our servers. Overall, it is a good thing to make them retry, as Russell said, because most times no second or third attempt is ever made! Jas
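To make the mechanics concrete: a greylisting setup defers the first delivery attempt with a 4xx response and admits the retry. A sketch using Postfix with postgrey, assuming postgrey is running on its default policy port 10023:

```
# /etc/postfix/main.cf fragment (illustrative)
smtpd_recipient_restrictions =
    permit_mynetworks,
    reject_unauth_destination,
    check_policy_service inet:127.0.0.1:10023
```

Legitimate MTAs retry after a few minutes and are then whitelisted for the (client, sender, recipient) triplet; as described above, most spamware never comes back.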
Re: Trusting Backports and unofficial Repositories
I'm currently using backports.org and dotdeb.org in production. I am also using backports.org and have been for a long time on quite a few servers. Admittedly we only use it for things like Spamassassin and nothing hugely mission-critical like kernels, but so far the packages have been of high quality. The combination of woody/stable and some updated packages (such as Spamassassin, which needs to keep up to date with the spammers) has been very effective.
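For anyone following along, pulling a single package such as Spamassassin from backports.org on woody looked roughly like this; the repository path and component layout are from memory and may differ, so treat the fragment as illustrative:

```
# /etc/apt/sources.list fragment (path illustrative)
deb http://www.backports.org/debian/ woody spamassassin
```

Followed by the usual apt-get update and apt-get install spamassassin. Listing only the components you need keeps the rest of the system pinned to plain stable.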
Re: Cheap Mainboard whith autostart ?
Well, most of the ASUS motherboards we use come with a Power option that allows you to resume the previous status after power loss. That means if the power was off to start with and there is a power failure, it stays off. If it had power beforehand, it boots back up. Hope that helps! Jas - Original Message - From: Michelle Konzack [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Saturday, 17 July, 2004 8:33 PM Subject: Cheap Mainboard whith autostart ?
Re: restricting sftp/ssh login access
how about using rbash? It only does the shell part, and it is not very hard to break out of the jail. But then again, allowing a shell when you think users are going to purposely try to break it isn't a good idea...
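rbash restricts the shell itself (no cd, no changing PATH, no output redirection), but as noted it is only a speed bump: any command that can spawn an unrestricted shell or editor escapes the jail. A quick sketch of what it blocks, using bash's equivalent --restricted flag:

```shell
# A restricted bash refuses the cd builtin; the error message contains
# the word "restricted".
bash --restricted -c 'cd /tmp' 2>&1 | grep -o 'restricted'
# prints: restricted
```

For sftp-only users, a chroot or an sftp-only shell is generally a stronger fence than rbash.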
Re: Which Spam Block List to use for a network?
most ISPs (and mail service providers like yahoo and hotmail), for instance, will never have SPF records in their DNS. they may use SPF checking on their own MX servers, but they won't have the records in their DNS. their users have legitimate needs to send mail using their address from any arbitrary location, which is exactly what SPF works to prevent. This also applies to most hosting companies. If your ISP blocks outgoing SMTP (port 25) to other mail servers and you are forced to use your ISP's mail servers, then the sending mail server is not going to match that of your hosting account or domain name. Thus SPF fails again in this case. SPF is useful and a *part* of the solution for *some* of the problem. it is not a magic bullet. I feel SPF is not going to be implemented in many places not because people don't want to reduce spam, but because SPF just won't work in many cases. In fact, depending on how you look at it, it doesn't reduce spam at ALL (phishing is certainly bad, but that is a separate problem). Jas
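For readers unfamiliar with the mechanism: SPF is published as a TXT record listing the hosts allowed to send mail for a domain, which is exactly why roaming users and forced ISP smarthosts break it. A hypothetical zone fragment (example.com and the address range are placeholders):

```
; allow the domain's MX hosts and one netblock to send; reject all else
example.com.  IN  TXT  "v=spf1 mx ip4:192.0.2.0/24 -all"
```

The "-all" asks receivers to reject mail from any other source, including a customer legitimately relaying through their ISP's mail server, which is the failure mode described above.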
Re: Which Spam Block List to use for a network?
I've used (through notespam) for my own private email, the following lists: Visi (relays.visi.com); ORDB (relays.ordb.org); SpamCop (bl.spamcop.net); dorkslayers (orbs.dorkslayers.com). Pretty good list... except for dorkslayers. In general, for an ISP or hosting provider (or anyone who handles large volumes of email) you should NOT go with the controversial lists on a global scale, or ones that are impossible to get off of. The reason is that you want to minimize false positives, even if this means a few extra spams get through. You cannot afford to have a CEO's email mistakenly blocked as spam. The best way to do this is to go with most of the open relay and open proxy lists. So that would be Visi and ORDB (you already have those) PLUS opm.blitzed.org and xbl.spamhaus.org. These two are also open proxy lists, although opm and xbl I think have the same content (so check to make sure, so you don't do double queries and waste bandwidth and others' resources). SpamCop works fine for my own email, where most people are whitelisted, but is said [1] not to be suitable for a production environment, and what we have here is precisely that... [1]: http://www.spamcop.net/bl.shtml SpamCop is okay... it has some controversial blocks such as Internetseer. I never asked for their email, but they got it somehow... well, anyway, some say they are hardcore spammers, some not. But SpamCop in general gets most of the US spam. However, it doesn't seem to catch much Korean/Chinese spam... so YMMV. Since I've only used this sort of thing at a personal email level, I'm wondering if anyone here could provide me with information on which would be a responsible and unbiased [*] block list for an *international* production environment. [*]: Several block lists seem to be highly biased, if not prejudiced, in the sense that they will easily block huge chunks of IP space from some countries but will hardly do so for ISPs within other countries.
Certainly avoid ALL country block lists, and block lists that include large chunks of IPs. This may include SPEWS and SBL. They are okay in a weighting system (such as with Spamassassin) but not good if you're using them to block outright (especially SPEWS, given its false positives). SBL is better than SPEWS, although less aggressive. It is better to do the open relay and open proxy blocking at the server level, and let people block the rest (e.g. block all China, block all Asia, block all Europe, SPEWS, etc.) at a client/personal level. That is the best solution we have found. You can also find a very good list of RBL spam lists at: http://www.declude.com/Articles.asp?ID=97 and it even has warnings and brief descriptions. I find it very useful for keeping updated on what's new and what's good. Hope this helps! Jas
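Mechanically, every list mentioned above is queried the same way: reverse the connecting IP's octets, append the list's zone, and do an A lookup; any answer means "listed". A sketch of forming the query name (192.0.2.99 is a placeholder address):

```shell
# Build the DNSBL query name for an address. An A record at this name
# (checked with e.g. `host`) would mean the address is listed.
ip="192.0.2.99"
zone="bl.spamcop.net"
echo "$ip" | awk -F. -v z="$zone" '{ print $4"."$3"."$2"."$1"."z }'
# prints: 99.2.0.192.bl.spamcop.net
```

This is also why using two lists with identical contents (as suspected of opm and xbl above) doubles your DNS traffic for no gain.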
Re: Intel Hyperthreading problem on server?
We're running Debian with a custom 2.4.26 kernel on a couple of dual Xeons, with Apache 1.3.x, without any problem. I'll admit that these are lightly loaded servers for now, but we've done some stress testing before they went into production and never saw this problem. Maarten Did you have mod_ssl installed as a module as well?
Re: Intel Hyperthreading problem on server?
Hi Gilles, Unfortunately, I never did. The solution was to disable Hyperthreading altogether, unfortunately. Perhaps others have had more luck? - Original Message - From: gilles.hanotel [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Wednesday, 16 June, 2004 5:13 AM Subject: Re: Intel Hyperthreading problem on server? Hello, I've just found your message about the hyperthreading problem with apache on the debian list and I have exactly the same problem on my box... As I've not seen an answer on the list, I would like to know if you have found something interesting to correct this issue... Thank you in advance. -- Gilles HANOTEL
Re: Intel Hyperthreading problem on server?
Dear Gilles, I'll try as well... hope we can find a solution. I have a few Red Hat Linux 9 servers with Hyperthreading CPUs, and no problem whatsoever. I think they run Apache 2, so maybe that is the solution... but surely there must be people running Apache 1.x with hyperthreading without any problem?! Jas - Original Message - From: gilles.hanotel [EMAIL PROTECTED] To: Jason Lim [EMAIL PROTECTED] Sent: Wednesday, 16 June, 2004 6:49 AM Subject: Re: Intel Hyperthreading problem on server? Hi Jason, Unfortunately, I never did. The solution was to disable Hyperthreading altogether, unfortunately. Perhaps others have had more luck? Google doesn't think so :( I have two servers with the same hardware, one with hyperthreading enabled and one without. As soon as there is a little load, the one with hyperthreading shows a lot of blocked processes.. Perhaps there is an SMP race condition with apache. I have a notebook with hyperthreading and I have used it as a workstation without any problem for months now... Still searching; if I find something I'll tell you ;-) Thanks -- Gilles HANOTEL
Re: how to relocate servers transparently
The biggest problem you will have is with the DNS. Set 1 of the DNS servers to the new IP, and keep 1 behind. Make sure the TTL is low... very low. Then, make sure the new DNS server on the new IP address is up and running alongside the old DNS server on the old IP (if possible), so at all times there is at least 1 DNS server running and active. Then do the switch. This should help you minimize downtime. Jas - Original Message - From: Rhesa Rozendaal [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Monday, 14 June, 2004 5:36 PM Subject: how to relocate servers transparently Hello all, I'm looking for some practical advice on moving servers from one colocation to another. In a couple of weeks, we need to move our servers to a different colocation, which means all the ip addresses will change. The servers are running regular stuff: web, mail, ftp, and two dns servers. In the past I witnessed such a move, and there were a lot of problems with the DNS. As it turned out, many DNS servers out there kept caching the old ip addresses for over 3 days, causing a lot of connection issues for many users. Beforehand we did lower the ttl on all the domains prior to the move, but many dns servers seemed to ignore that. On top of that, we moved both our dns servers at the same time, which was a big mistake too. So, what I'd like to hear from you is practical advice on how to avoid connection problems after the move is complete. Will it be enough to keep 1 dns server behind? I'm afraid it won't be, given the dns caching problem mentioned above. Is there a way to have that 1 dns server act as a proxy or port forwarder in some way? Can that be done between two different class A networks? Btw, the servers are running debian stable (woody). Thanks in advance, Rhesa Rozendaal
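Concretely, "make sure the TTL is low" means dropping the TTLs in the zone files well before the move so caches expire quickly. A hypothetical BIND zone fragment (names and addresses are placeholders):

```
; lower TTLs well before the move so resolvers re-query often
$TTL 300                      ; 5 minutes instead of a day
www   IN  A   192.0.2.10      ; old address; repoint at cutover
mail  IN  A   192.0.2.11
```

Note the timing: caches hold the *old* record for up to the *old* TTL, so the lowered TTL must be published at least one old-TTL period before the move, and bump the zone serial so the secondaries pick it up.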
Re: Chkrootkit - true/false ?
Checking `lkm'... You have 3 process hidden for readdir command You have 3 process hidden for ps command Warning: Possible LKM Trojan installed Sometimes chkrootkit returns nothing detected, and every time rkhunter tells me nothing is wrong. Is this a false positive with chkrootkit and debian woody? No, I don't get that error. What I can note is that one time one of the servers got stuffed up for some reason (the RAID array borked at the wrong moment or something) and something weird happened to /proc or such. We actually didn't know this at the time, so we ran chkrootkit (the backports.org version) and found a similar error to yours. We were all frantic, checking the backups and everything, until we checked the logs and saw a RAID error. We rebooted the server, re-ran chkrootkit, and all was fine. This certainly does not mean the same in your case, but I just thought you might want to know. Jas
Re: You can start saving now
You have to weigh up the pros and cons of this. Presumably lists.debian.org already uses some kind of spam filtering, such as using ordb.org or spamcop.net or something to filter spamming IPs outright? Then on your end, you can run Spamassassin, which will look at the content (I presume the lists.debian.org server does not have enough resources to run Spamassassin to do content-based filtering?), and it will tag it as spam or whatever else you want it to do. Therefore any spam that does come through debian is still marked as spam anyway. - Original Message - From: Richard Zuidhof [EMAIL PROTECTED] To: [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Monday, 10 May, 2004 4:09 AM Subject: Re: You can start saving now Somebody please make this list member-only. I am sick of the spam I receive through this list; it is my main source of spam. Richard --- Outgoing mail is certified Virus Free. Checked by AVG anti-virus system (http://www.grisoft.com). Version: 6.0.677 / Virus Database: 439 - Release Date: 4-5-2004
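The local tagging described above can be done with a short procmail recipe; a minimal sketch, assuming spamassassin is installed and "spam" is a local mbox folder of your choosing:

```
# ~/.procmailrc fragment (illustrative)
# Pass each message through spamassassin, which adds X-Spam headers...
:0fw
| spamassassin

# ...then file anything it flagged into a spam folder.
:0:
* ^X-Spam-Status: Yes
spam
```

With this in place, list spam still arrives but lands pre-sorted, regardless of what filtering the list server itself does.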
Re: bdflush or others affecting disk cache
Because it's more efficient to use swap space to hold stuff from RAM that is not currently being used. But that means the kernel is assuming it can use that memory more efficiently as cache than by keeping the data in RAM as the software requested? I'd take a whack at this and guess that cache misses would result in lower performance here? Linux will happily shift process memory into swap to make more room for buffers. Why keep 100M worth of not-currently-active daemon in RAM when there is a process trying to buffer the whole disk? I agree... but the swap space usage is constantly changing... so I guess that means the VM is making a poor decision as to what is not-currently-active... swapping out stuff that then needs to be read back or written, causing the disk thrashing? Wouldn't it be far, FAR faster for the system to reduce the cache by about 100Mb or so instead of swapping that 100Mb to disk? And note that the swap No. It is faster to use that memory for buffers if the system is disk bound. Well, it is disk bound because it is constantly using swap... causing it to be disk bound... causing the system to increase cache size... causing more swap usage... etc. Anyone seen this before? - Original Message - From: Donovan Baarda [EMAIL PROTECTED] To: Jason Lim [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Monday, April 19, 2004 08:28 AM Subject: Re: bdflush or others affecting disk cache On Mon, 2004-04-19 at 09:31, Jason Lim wrote: Hi all, I've been banging my head on this one for a while now on a 2.4.20 system. [...] The problem is that swap usage can grow to 100Mb... yet the buffers and cache remain at astoundingly high levels. I can actually see memory going to cache and buffers and at the same time see swap usage increasing! What I don't get is why the system... with about 700Mb in cache and 70Mb in buffers, is using swap space at all. [...]
Because it's more efficient to use swap space to hold stuff from RAM that is not currently being used. Linux will happily shift process memory into swap to make more room for buffers. Why keep 100M worth of not-currently-active daemon in RAM when there is a process trying to buffer the whole disk? Wouldn't it be far, FAR faster for the system to reduce the cache by about 100Mb or so instead of swapping that 100Mb to disk? And note that the swap usage is constantly fluctuating, so you can see the performance problem this is causing. Any ideas?! No. It is faster to use that memory for buffers if the system is being disk bound. The VM management code in Linux is something that is constantly getting tweaked and re-written. 2.4.20 is quite old now, and it wouldn't surprise me if the current 2.4.26 kernel has had the VM significantly improved since then. The performance hits you are seeing are probably because a process is walking through the disk. The 2.4.20 VM system may not be handling it as gracefully as it could, but I bet there is a process doing heaps of disk reads that is triggering it. -- Donovan Baarda [EMAIL PROTECTED] http://minkirri.apana.org.au/~abo/
Re: bdflush or others affecting disk cache
Followup: interesting results. I've now tried removing the swap altogether (swapoff) and the server appears to be running much smoother and faster. Here is the new top info: 212 processes: 210 sleeping, 2 running, 0 zombie, 0 stopped CPU states: 8.4% user, 32.2% system, 0.9% nice, 58.2% idle Mem: 1027212K av, 1015440K used, 11772K free, 0K shrd, 186196K buff Swap: 0K av, 0K used, 0K free 370588K cached By the way, most of the processes are httpd and mysql (this is a hosting server). I'm somewhat concerned about having no swap though... any side-effects of running with no swap? As expected, most of the swapped data reverted to RAM, reducing the cache size (by approximately the amount that was used by swap). Hope someone can shed some light on this. I'm looking at the results, but can't understand why it is swapping so aggressively... to the point that it is running itself out of RAM for active programs to increase cache size. Jas - Original Message - From: Jason Lim [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Monday, 19 April, 2004 7:31 AM Subject: bdflush or others affecting disk cache Hi all, I've been banging my head on this one for a while now on a 2.4.20 system. Here is the output of top: Mem: 1027212K av, 1018600K used, 8612K free, 0K shrd, 70728K buff Swap: 2097136K av, 35556K used, 2061580K free 690140K cached and the output of free: total used free shared buffers cached Mem: 1027212 1016256 10956 0 71528 683956 -/+ buffers/cache: 260772 766440 Swap: 2097136 34692 2062444 The problem is that swap usage can grow to 100Mb... yet the buffers and cache remain at astoundingly high levels. I can actually see memory used for cache and buffers increasing and at the same time see swap usage increasing! What I don't get is why the system... with about 700Mb in cache and 70Mb in buffers, is using swap space at all. I've searched high and low on Google... using phrases like linux kernel proc cache, buffers, bdflush, etc. but I still can't explain this. 
Wouldn't it be far, FAR faster for the system to reduce the cache by about 100Mb or so instead of swapping that 100Mb to disk? And note that the swap usage is constantly fluctuating, so you can see the performance problem this is causing. Any ideas?! Thanks in advance. Jas -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
bdflush or others affecting disk cache
Hi all, I've been banging my head on this one for a while now on a 2.4.20 system. Here is the output of top: Mem: 1027212K av, 1018600K used, 8612K free, 0K shrd, 70728K buff Swap: 2097136K av, 35556K used, 2061580K free 690140K cached and the output of free: total used free shared buffers cached Mem: 1027212 1016256 10956 0 71528 683956 -/+ buffers/cache: 260772 766440 Swap: 2097136 34692 2062444 The problem is that swap usage can grow to 100Mb... yet the buffers and cache remain at astoundingly high levels. I can actually see memory used for cache and buffers increasing and at the same time see swap usage increasing! What I don't get is why the system... with about 700Mb in cache and 70Mb in buffers, is using swap space at all. I've searched high and low on Google... using phrases like linux kernel proc cache, buffers, bdflush, etc. but I still can't explain this. Wouldn't it be far, FAR faster for the system to reduce the cache by about 100Mb or so instead of swapping that 100Mb to disk? And note that the swap usage is constantly fluctuating, so you can see the performance problem this is causing. Any ideas?! Thanks in advance. Jas
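The numbers above come from top and free; you can also read the same counters straight from /proc and watch the actual swap-in/swap-out traffic over time. A sketch (field names as exposed by the Linux /proc interface; vmstat ships in the procps package):

```shell
# Snapshot of the memory/buffer/cache/swap totals top and free report,
# taken directly from the kernel's /proc/meminfo
awk '/^(MemTotal|MemFree|Buffers|Cached|SwapTotal|SwapFree):/ {print $1, $2, $3}' /proc/meminfo

# Watch live swap traffic: the si/so columns show KB swapped in/out
# per second; sustained non-zero values there are what cause thrashing
vmstat 1 5
```

If si/so stay at zero while swap "used" is non-zero, the pages sitting in swap are genuinely idle and aren't the cause of the disk load.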
Re: Fixed (hardisk) device names?
If I now must remove the first harddisk (/dev/hda) the second (/dev/hdb) will be renamed to (/dev/hda) after the reboot. As I want /dev/hdb to be... That's EXACTLY what Linux does for IDE drives. The slave drive on the primary IDE controller will *always* be /dev/hdb, regardless of whether there is a master drive or not. /dev/hda - master drive on primary IDE controller /dev/hdb - slave drive on primary IDE controller /dev/hdc - master drive on secondary IDE controller /dev/hdd - slave drive on secondary IDE controller Is this possible? It's standard. I think that is his point... but it doesn't do that for him. Apparently... he has a master drive (hda) and slave drive (hdb) on the primary IDE controller... but if he then removes the master drive, then suddenly the slave drive becomes hda! Correct me if I'm wrong... :-) Personally I've never seen that happen. The ONLY thing I could think of... is to specifically set the jumpers on the HDs to FORCE one hard disk to be master, and the other to be slave. That way, it is IMPOSSIBLE for the system to get it wrong. Do not rely on the cable select jumper. Don't use dd for that. Set up a RAID-1 mirror instead. It's easy to do, only about 5 minutes work. If only it was really so easy... personally, I use 3ware cards... but just recently one of the 3ware cards barfed, and turned a RAID 1 (with 2 HDs and 1 spare) somehow into a RAID 1 with 2 drives (the 1 HD and the spare) AND another RAID 1 with 1 drive (which used to be part of the original RAID 1). Ever seen something like this before? I was looking at MONDO for a solution to this... but it does not appear that MONDO will be able to resolve this very well at all, and it adds a whole level of complexity to the setup. I was thinking... perhaps a solution would be to set up a RAID 1 between the 3ware RAID 1 and a large IDE HD. Would that be a good workaround in case of catastrophic failure on the 3ware RAID? 
Also, for performance and safety, put your second drive on a separate IDE controller. That way it will still work even if one IDE controller fails. E.g. have /dev/hda (primary IDE master) and /dev/hdc (secondary IDE master) rather than /dev/hda and /dev/hdb. That is always a good suggestion. Even if one cable had a problem, the other drive won't be affected... the only cost of doing this is an extra IDE cable, so there's no reason not to!
Re: backup script
From: http://tldp.org/LDP/abs/html/textproc.html try the cut command. Sounds like it does just what you want. -J - Original Message - From: Alexandros Papadopoulos [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Wednesday, 10 March, 2004 5:06 PM Subject: Re: backup script On Wednesday 10 March 2004 09:29, Craig Schneider wrote: snip Just battling to use awk to extract the last four columns. -rwxrwxr-x 1 root root [ 234 Mar 10 06:38 backup ] Any help would be greatly appreciated. Shell scripting is definitely not one of my strong points. http://tldp.org/guides.html Look for the Advanced Bash Shell Scripting guide - it'll cover all you need. Especially for awk, http://sparky.rice.edu/~hartigan/awk.html is also interesting. -A
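For the concrete question asked here (last four columns of an ls -l line), awk's NF variable does it in one go; the sample line below is reconstructed from the quoted output. cut works on fixed character positions, which ls output doesn't guarantee, so awk's field references are the more robust tool:

```shell
# Print the last four whitespace-separated fields of a line.
# $NF is the last field, $(NF-1) the one before it, and so on.
line='-rwxrwxr-x 1 root root 234 Mar 10 06:38 backup'
echo "$line" | awk '{print $(NF-3), $(NF-2), $(NF-1), $NF}'
# -> Mar 10 06:38 backup
```

Counting from the end means the command keeps working even when the size or link-count columns change width.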
Re: POP3 accounts
You might also want to check out: http://www.qmail.org/ and the vpopmail Debian package. The basic idea is that you don't use real usernames that exist on the server, but instead create fake ones (such as a user called [EMAIL PROTECTED]) just for checking POP3 email. Do some reading... also check out http://www.lifewithqmail.org/ which describes how to do it with Qmail (and no doubt other mail software has its own guides). - Original Message - From: Robert Cates [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Wednesday, 11 February, 2004 7:02 PM Subject: POP3 accounts Hi, I would like to get a job at a nearby ISP, and therefore I'm trying to learn more about various (technical) aspects of the business. So I would really appreciate it if somebody would explain to me how I can, for example, have as many as 50 or so POP3 accounts with an ISP, when I really have only one real/login account. I've really learned a lot from the Debian 3.0 server I've set up, but some things just seem to elude me. Would it be in the POP server configuration, like Qpopper? Because I can't imagine the ISP will set up 50 separate accounts for each POP3 account, and then that times 1000+ real customer accounts. Thank you very much for your help!! Robert
Re: routing help
it basically cycles through the ip addresses pinging a host on just the other side of the router so it flushes the ARP cache. Does this sound correct or am I totally off the track here? Anyway it is all working now but I guess I'd like to know if what I had to do was correct or not? I believe there is a way to force a refresh or such of the ARP cache. Not sure how... but it can be done somehow. I'd be interested to learn the method under Linux as well, so if you find out, share it with the group :-)
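For the record, Linux does let you inspect and flush ARP entries by hand rather than pinging around the router. A sketch, assuming the iproute (ip) or net-tools (arp) utilities; the address and interface names are placeholders:

```shell
# List the current ARP (neighbour) cache
ip neigh show          # or, with net-tools: arp -an

# Delete one stale entry so it gets re-resolved (placeholder address)
ip neigh del 192.0.2.1 dev eth0    # or: arp -d 192.0.2.1

# Flush everything learned on one interface (needs root)
ip neigh flush dev eth0
```

After a flush the kernel re-ARPs on the next packet, which is the effect the ping loop was approximating.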
Re: shell access exploits (was Re: upgrading to MySQL 4 on woody)
One of my hats is a junior sys admin in an academic environment. I'm curious as to how you know when shell users are trying to exploit a kernel hole. chkrootkit?
Re: Attempts to poison bayesian systems
One technique that's being used a lot is to get books in electronic form and put a couple of sentences in every spam (sentences from a book will pass grammatical checking etc, unlike the example you posted above). Also text from a book will have the right ratio of words; you will almost never find such a long sentence in an email message without a punctuation character, and, or, or other common words except in the case of source code (which is another category in bayesian filters). That won't work very well with SpamAssassin, as it doesn't rely on bayesian filtering alone, and also uses header checks and DNSBL checks. So you are correct... it does lower the bayesian score with these random legitimate sentences, but doesn't get them through completely unless you are using something like popfilter or such that only has bayesian filtering. And also note they can't have only these sentences in their emails... they must still have the catch line like increase pen1s size or something like that, and the bayesian filter will, over time, learn that all the other words are not as important as pen1s and other such words. So eventually it will work... at least that's my understanding of it. Feel free to improve or correct the above.
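The "other words are not as important" point can be made concrete with a toy Graham-style probability combination (NOT SpamAssassin's actual scoring; the per-token probabilities here are made up): tokens seen about equally often in spam and ham end up near p = 0.5 and contribute no evidence, so padding a message with them leaves the combined verdict essentially unchanged.

```shell
# Combine per-token spam probabilities the naive-Bayes way:
# s = product of p(token), h = product of (1 - p(token)),
# final score = s / (s + h)
score() {
  awk 'BEGIN { s = 1; h = 1 }
       { s *= $1; h *= (1 - $1) }
       END { printf "%.4f\n", s / (s + h) }'
}
printf '0.99\n0.70\n' | score                            # the spammy tokens alone
{ printf '0.99\n0.70\n'; yes 0.5 | head -50; } | score   # plus 50 neutral words
```

Both runs print the same score: the 0.5 padding multiplies s and h by the same factor and cancels out, which is why book-sentence poisoning mostly fails against trained filters.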
Re: Intel Hyperthreading problem on server?
I do not appear to be having the same problem you guys are. The machine does not have a high load, but has not exhibited any problems whatsoever. Running vanilla source 2.4.23 from kernel.org. Are you using Debian kernel packages or vanilla source? Any other magic going on? Possibly a bug in some other DSO you're using? Yeah, this may make sense. I do use some pretty heavy php modules (xslt and dom), but the reference deployment in non-smp does the exact same thing and does not crash. I am running just the standard Debian-compiled/included modules, except for mod_throttle (and I think even that may have been included?!) and mod_gzip (that too). Same thing... Apache hangs with SMP, works perfectly with a non-SMP kernel. I compile directly from kernel.org kernels only, and haven't used the Debian patched kernels in a long while.
Intel Hyperthreading problem on server?
Hi All... Do you guys know anything about a problem with Intel Hyperthreading (eg. on the Intel 2.4Ghz HT-enabled processor) that would cause the load average to jump to over 200? Here is the log line: Dec 16 22:48:17 be watchdog[250]: loadavg 203 101 40 is higher than the given threshold 200 150 100! (then it reboots) This happened on the 2.4.22 kernel, and now I tried it with the 2.4.23 kernel, and it has the same problem. When the kernel is compiled WITHOUT SMP support, the kernel works fine, and it can have uptimes of months without any problem. But when SMP is compiled in, and the HT processor is correctly identified (and top can see CPU0 and CPU1), then it only takes about an hour or two of operation before the load average jumps like that. Note that this is with Debian woody/stable, and with a clean kernel.org kernel. Do you guys know anything about this, or have any ideas where I should look? Is there something in Woody that isn't friendly with SMP or perhaps HyperThreading processors? Thanks in advance. Sincerely, Jas
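As a quick baseline before digging further, it's worth confirming from the shell exactly what the SMP kernel sees. A sketch (a single HT-enabled P4 would show two logical processors and the "ht" cpuinfo flag):

```shell
# Count the logical CPUs the kernel has brought up
grep -c '^processor' /proc/cpuinfo

# Check the hyper-threading feature flag is reported by the CPU
grep -m1 -o '\bht\b' /proc/cpuinfo || echo "no ht flag"

# Confirm the running kernel was built with SMP support
uname -v | grep -i smp
```

If the processor count is 2 and the ht flag is present, detection itself is fine and the problem is elsewhere (scheduler, BIOS setting, or a userspace bug only exposed under true concurrency).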
Re: Intel Hyperthreading problem on server?
Just noticed one more thing... it appears to be Apache causing the super high load (among other programs running) when SMP is compiled into the kernel, and with a bunch of errors in syslog: [Wed Dec 17 02:27:37 2003] [notice] child pid xx exit signal Segmentation fault (11) (and a whole bunch of these errors, like 50 lines) I did a search and someone said it has to do with Apache requesting memory that it doesn't own or something: http://lists.debian.org/debian-apache/2002/debian-apache-200207/msg5.html but that doesn't really help in this case, unless you guys can think of a different angle on this? - Original Message - From: Jason Lim [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Tuesday, December 16, 2003 11:23 PM Subject: Intel Hyperthreading problem on server? Hi All... Do you guys know anything about a problem with Intel Hyperthreading (eg. on the Intel 2.4Ghz HT-enabled processor) that would cause the load average to jump to over 200? Here is the log line: Dec 16 22:48:17 be watchdog[250]: loadavg 203 101 40 is higher than the given threshold 200 150 100! (then it reboots) This happened on the 2.4.22 kernel, and now I tried it with the 2.4.23 kernel, and it has the same problem. When the kernel is compiled WITHOUT SMP support, the kernel works fine, and it can have uptimes of months without any problem. But when SMP is compiled in, and the HT processor is correctly identified (and top can see CPU0 and CPU1), then it only takes about an hour or two of operation before the load average jumps like that. Note that this is with Debian woody/stable, and with a clean kernel.org kernel. Do you guys know anything about this, or have any ideas where I should look? Is there something in Woody that isn't friendly with SMP or perhaps HyperThreading processors? Thanks in advance. Sincerely, Jas
Re: Intel Hyperthreading problem on server?
Hi, Interesting info... especially the part: Do you have high memory support compiled in ? High memory support above 4GB might cause problems. If you do not have more than 2GB of RAM you should make sure that High memory support is not enabled. The server has 1.5GB RAM. I compiled it to have High Memory support (4GB) because I don't know how much more RAM may be added in the future. I suppose I could try going back as you suggested, but the kernel info suggests that the 4GB High memory support *should* work for RAM less than that too :-/ Most frustrating. I will try re-compiling with your suggestion a bit later today, and let you know how it turns out. - Original Message - From: Theodore Knab [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Wednesday, December 17, 2003 10:07 AM Subject: Re: Intel Hyperthreading problem on server? I am using the 2.4.20 kernel with SMP support on a Hyper-threading Intel. I remember having problems getting it to work with SMP support initially. I think the kernel has to be perfect. ;-) Do you have high memory support compiled in ? High memory support above 4GB might cause problems. If you do not have more than 2GB of RAM you should make sure that High memory support is not enabled. Also did you enable hyper-threading in BIOS ? Auto-detect modes might cause problems. 
http://publib-b.boulder.ibm.com/Redbooks.nsf/RedbookAbstracts/tips0175.html?Open My system: Linux tedsdesk 2.4.20 #22 SMP Mon Jul 21 14:53:07 EDT 2003 i686 GNU/Linux [EMAIL PROTECTED]:cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 15 model : 1 model name : Intel(R) Pentium(R) 4 CPU 1.50GHz stepping: 2 cpu MHz : 1495.172 cache size : 256 KB fdiv_bug: no hlt_bug : no f00f_bug: no coma_bug: no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm bogomips: 2981.88 The ht in the flags section tells me hyper threading is being recognized. On 16/12/03 23:23 +0800, Jason Lim wrote: Hi All... Do you guys know anything about a problem with Intel Hyper-threading (eg. on the Intel 2.4Ghz HT-enabled processor) that would cause the load average to jump to over 200? Here is the log line: Dec 16 22:48:17 be watchdog[250]: loadavg 203 101 40 is higher than the given threshold 200 150 100! (then it reboots) This happened on the 2.4.22 kernel, and now I tried it with the 2.4.23 kernel, and it has the same problem. When the kernel is compiled WITHOUT SMP support, the kernel works fine, and it can have uptimes of months without any problem. But when SMP is compiled in, and the HT processor is correctly identified (and top can see CPU0 and CPU1), then it only takes about an hour or two of operation before the load average jumps like that. Note that this is with Debian woody/stable, and with a clean kernel.org kernel. Do you guys know anything about this, or have any ideas where I should look? Is there something in Woody that isn't friendly with SMP or perhaps Hyper-Threading processors? Thanks in advance. Sincerely, Jas -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? 
Contact [EMAIL PROTECTED] -- Ted Knab, Chester, MD 21619
Re: Intel Hyperthreading problem on server?
On Tue, 16-12-2003 at 12:39, Jason Lim wrote: Just noticed one more thing... it appears to be Apache causing the super high load (among other programs running) when SMP is compiled into the kernel, and with a bunch of errors in syslog: [Wed Dec 17 02:27:37 2003] [notice] child pid xx exit signal Segmentation fault (11) (and a whole bunch of these errors, like 50 lines) I did a search and someone said it has to do with Apache requesting memory that it doesn't own or something: http://lists.debian.org/debian-apache/2002/debian-apache-200207/msg5.html Hmm... I don't want to be hasty, but it seems I'm looking at exactly this problem for a very memory-hungry PHP application. Except in my case, this error ONLY appears if SMP support is compiled into the kernel; otherwise, it runs smoothly even under very high load. Apache doesn't immediately have the problem with SMP compiled in though... it takes maybe an hour or two before the problem appears.
Re: Intel Hyperthreading problem on server?
I was also considering the possibility of hardware error, but if it works 100% reliably without HT/SMP, but virtually crashes at high load with Apache, that would pretty much rule out hardware error, unless the CPU's HT is buggy (highly unlikely). Well, its not that the kernel does not detect the ht, it does and quite fine (shows lots of processors in the box and all). The problem is that apache is crashing under high load with a segfault. Now, as i understand it, this can be a faulty hardware problem (bad memory=segfault) or an actual software problem. Im not shure, but im having this problem as well with an HT server and have not been able to rule out the possibility of a faulty hardware thing. Nonetheless, this can also be, for example, an ugly module in woodies php4 which are particluarly edgy (xslt for example) under high load due to them being in beta stage by the time woody froze. El mar, 16-12-2003 a las 20:07, Theodore Knab escribió: I am using the 2.4.20 kernel with SMP support on a Hyper-threading Intel. I remember having problems getting it work with SMP support initially. I think the kernel has to be perfect. ;-) Do you have high memory support compiled in ? High memory support above 4GB might cause problems. If you do not have more than 2GB of RAM you should make sure that High memory support is not enabled. Also did you enable hyper-threading in BIOS ? Auto-detect modes might cause problems. 
http://publib-b.boulder.ibm.com/Redbooks.nsf/RedbookAbstracts/tips0175.html?Open My system: Linux tedsdesk 2.4.20 #22 SMP Mon Jul 21 14:53:07 EDT 2003 i686 GNU/Linux [EMAIL PROTECTED]:cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 15 model : 1 model name : Intel(R) Pentium(R) 4 CPU 1.50GHz stepping: 2 cpu MHz : 1495.172 cache size : 256 KB fdiv_bug: no hlt_bug : no f00f_bug: no coma_bug: no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm bogomips: 2981.88 The ht in the flags section tells me hyper threading is being recognized. On 16/12/03 23:23 +0800, Jason Lim wrote: Hi All... Do you guys know anything about a problem with Intel Hyper-threading (eg. on the Intel 2.4Ghz HT-enabled processor) that would cause the load average to jump to over 200? Here is the log line: Dec 16 22:48:17 be watchdog[250]: loadavg 203 101 40 is higher than the given threshold 200 150 100! (then it reboots) This happened on the 2.4.22 kernel, and now I tried it with the 2.4.23 kernel, and it has the same problem. When the kernel is compiled WITHOUT SMP support, the kernel works fine, and it can have uptimes of months without any problem. But when SMP is compiled in, and the HT processor is correctly identified (and top can see CPU0 and CPU1), then it only takes about an hour or two of operation before the load average jumps like that. Note that this is with Debian woody/stable, and with a clean kernel.org kernel. Do you guys know anything about this, or have any ideas where I should look? Is there something in Woody that isn't friendly with SMP or perhaps Hyper-Threading processors? Thanks in advance. Sincerely, Jas -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? 
Contact [EMAIL PROTECTED] -- -- Ted Knab Chester, MD 21619 -- 35570707f6274702478656021626f6c6964796f6e602f66602478656 02e6164796f6e60237471647560216e6460276c6f62616c60257e696 4797e2a0 -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED] -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Intel Hyperthreading problem on server?
I just checked the kernel info for the memory support part: Hi, Interesting info... especially the part: Do you have high memory support compiled in ? High memory support above 4GB might cause problems. If you do not have more than 2GB of RAM you should make sure that High memory support is not enabled. The server has 1.5Gb RAM. I compiled it to have High Memory support (4Gb) because I don't know how much more RAM it may have added in the future. I suppose I could try going back as you suggested, but the Kernel info suggests that the 4Gb RAM High memory support *should* work for RAM less than that too :-/ Most frustrating. I will try re-compiling with your suggestion a bit later today, and let you know how it turns out. - Original Message - From: Theodore Knab [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Wednesday, December 17, 2003 10:07 AM Subject: Re: Intel Hyperthreading problem on server? I am using the 2.4.20 kernel with SMP support on a Hyper-threading Intel. I remember having problems getting it work with SMP support initially. I think the kernel has to be perfect. ;-) Do you have high memory support compiled in ? High memory support above 4GB might cause problems. If you do not have more than 2GB of RAM you should make sure that High memory support is not enabled. Also did you enable hyper-threading in BIOS ? Auto-detect modes might cause problems. 
http://publib-b.boulder.ibm.com/Redbooks.nsf/RedbookAbstracts/tips0175.html?Open My system: Linux tedsdesk 2.4.20 #22 SMP Mon Jul 21 14:53:07 EDT 2003 i686 GNU/Linux [EMAIL PROTECTED]:cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 15 model : 1 model name : Intel(R) Pentium(R) 4 CPU 1.50GHz stepping: 2 cpu MHz : 1495.172 cache size : 256 KB fdiv_bug: no hlt_bug : no f00f_bug: no coma_bug: no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm bogomips: 2981.88 The ht in the flags section tells me hyper threading is being recognized. On 16/12/03 23:23 +0800, Jason Lim wrote: Hi All... Do you guys know anything about a problem with Intel Hyper-threading (eg. on the Intel 2.4Ghz HT-enabled processor) that would cause the load average to jump to over 200? Here is the log line: Dec 16 22:48:17 be watchdog[250]: loadavg 203 101 40 is higher than the given threshold 200 150 100! (then it reboots) This happened on the 2.4.22 kernel, and now I tried it with the 2.4.23 kernel, and it has the same problem. When the kernel is compiled WITHOUT SMP support, the kernel works fine, and it can have uptimes of months without any problem. But when SMP is compiled in, and the HT processor is correctly identified (and top can see CPU0 and CPU1), then it only takes about an hour or two of operation before the load average jumps like that. Note that this is with Debian woody/stable, and with a clean kernel.org kernel. Do you guys know anything about this, or have any ideas where I should look? Is there something in Woody that isn't friendly with SMP or perhaps Hyper-Threading processors? Thanks in advance. Sincerely, Jas -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? 
Contact [EMAIL PROTECTED] -- -- Ted Knab Chester, MD 21619 -- 35570707f6274702478656021626f6c6964796f6e602f66602478656 02e6164796f6e60237471647560216e6460276c6f62616c60257e696 4797e2a0 -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED] -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED] -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Intel Hyperthreading problem on server?
Hi, I just checked the kernel info for the memory support part: If you are compiling a kernel which will never run on a machine with more than 960 megabytes of total physical RAM, answer off here (defau choice and suitable for most users). This will result in a 3GB/1GB split: 3GB are mapped so that each process sees a 3GB virtual memory space and the remaining part of the 4GB virtual memory space is used by the kernel to permanently map as much physical memory as possible. If the machine has between 1 and 4 Gigabytes physical RAM, then answer 4GB here. I guess with 1.5Gb RAM you need to go with the 4Gb option... so that won't work :-( and having just 960M RAM wouldn't work either... Hi, Interesting info... especially the part: Do you have high memory support compiled in ? High memory support above 4GB might cause problems. If you do not have more than 2GB of RAM you should make sure that High memory support is not enabled. The server has 1.5Gb RAM. I compiled it to have High Memory support (4Gb) because I don't know how much more RAM it may have added in the future. I suppose I could try going back as you suggested, but the Kernel info suggests that the 4Gb RAM High memory support *should* work for RAM less than that too :-/ Most frustrating. I will try re-compiling with your suggestion a bit later today, and let you know how it turns out. - Original Message - From: Theodore Knab [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Wednesday, December 17, 2003 10:07 AM Subject: Re: Intel Hyperthreading problem on server? I am using the 2.4.20 kernel with SMP support on a Hyper-threading Intel. I remember having problems getting it work with SMP support initially. I think the kernel has to be perfect. ;-) Do you have high memory support compiled in ? High memory support above 4GB might cause problems. If you do not have more than 2GB of RAM you should make sure that High memory support is not enabled. Also did you enable hyper-threading in BIOS ? 
Auto-detect modes might cause problems. http://publib-b.boulder.ibm.com/Redbooks.nsf/RedbookAbstracts/tips0175.html?Open My system: Linux tedsdesk 2.4.20 #22 SMP Mon Jul 21 14:53:07 EDT 2003 i686 GNU/Linux [EMAIL PROTECTED]:cat /proc/cpuinfo processor : 0 vendor_id : GenuineIntel cpu family : 15 model : 1 model name : Intel(R) Pentium(R) 4 CPU 1.50GHz stepping: 2 cpu MHz : 1495.172 cache size : 256 KB fdiv_bug: no hlt_bug : no f00f_bug: no coma_bug: no fpu : yes fpu_exception : yes cpuid level : 2 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm bogomips: 2981.88 The ht in the flags section tells me hyper threading is being recognized. On 16/12/03 23:23 +0800, Jason Lim wrote: Hi All... Do you guys know anything about a problem with Intel Hyper-threading (eg. on the Intel 2.4Ghz HT-enabled processor) that would cause the load average to jump to over 200? Here is the log line: Dec 16 22:48:17 be watchdog[250]: loadavg 203 101 40 is higher than the given threshold 200 150 100! (then it reboots) This happened on the 2.4.22 kernel, and now I tried it with the 2.4.23 kernel, and it has the same problem. When the kernel is compiled WITHOUT SMP support, the kernel works fine, and it can have uptimes of months without any problem. But when SMP is compiled in, and the HT processor is correctly identified (and top can see CPU0 and CPU1), then it only takes about an hour or two of operation before the load average jumps like that. Note that this is with Debian woody/stable, and with a clean kernel.org kernel. Do you guys know anything about this, or have any ideas where I should look? Is there something in Woody that isn't friendly with SMP or perhaps Hyper-Threading processors? Thanks in advance. Sincerely, Jas -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? 
Contact [EMAIL PROTECTED] -- -- Ted Knab Chester, MD 21619 -- 35570707f6274702478656021626f6c6964796f6e602f66602478656 02e6164796f6e60237471647560216e6460276c6f62616c60257e696 4797e2a0 -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED] -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED
Intel Hyperthreading problem on server?
Hi All...

Do you guys know anything about a problem with Intel Hyper-Threading (e.g. on the Intel 2.4 GHz HT-enabled processor) that would cause the load average to jump to over 200? Here is the log line:

Dec 16 22:48:17 be watchdog[250]: loadavg 203 101 40 is higher than the given threshold 200 150 100!

(then it reboots)

This happened on the 2.4.22 kernel, and now I have tried it with the 2.4.23 kernel, which has the same problem. When the kernel is compiled WITHOUT SMP support, it works fine and can have uptimes of months without any problem. But when SMP is compiled in and the HT processor is correctly identified (top can see CPU0 and CPU1), it only takes about an hour or two of operation before the load average jumps like that.

Note that this is with Debian woody/stable and a clean kernel.org kernel. Do you know anything about this, or have any ideas where I should look? Is there something in Woody that isn't friendly with SMP or Hyper-Threading processors?

Thanks in advance.

Sincerely,
Jas
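The watchdog's trigger can be re-derived from that log line. A minimal sketch, assuming the field layout of the exact message quoted above (this is not the watchdog package's actual code):

```shell
# Parse the logged 1/5/15-minute load averages and their thresholds,
# then count how many thresholds were exceeded. Field positions assume
# the exact message format quoted above.
line="loadavg 203 101 40 is higher than the given threshold 200 150 100!"
o1=$(echo "$line" | awk '{print $2}')   # 1-minute average
o5=$(echo "$line" | awk '{print $3}')   # 5-minute average
o15=$(echo "$line" | awk '{print $4}')  # 15-minute average
t1=$(echo "$line" | awk '{print $11}')
t5=$(echo "$line" | awk '{print $12}')
t15=$(echo "$line" | awk '{gsub(/!/,"",$13); print $13}')  # strip "!"
over=0
[ "$o1"  -gt "$t1"  ] && over=$((over+1))
[ "$o5"  -gt "$t5"  ] && over=$((over+1))
[ "$o15" -gt "$t15" ] && over=$((over+1))
echo "thresholds exceeded: $over"
```

Only the 1-minute average (203 > 200) is over its limit here, which points at a sudden spike rather than sustained load.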
Re: Intel Hyperthreading problem on server?
Just noticed one more thing... it appears to be Apache causing the super-high load (among other programs running) when SMP is compiled into the kernel, along with a bunch of errors in syslog:

[Wed Dec 17 02:27:37 2003] [notice] child pid xx exit signal Segmentation fault (11)

(and a whole bunch of these errors, like 50 lines)

I did a search and someone said it has to do with Apache requesting memory that it doesn't own:

http://lists.debian.org/debian-apache/2002/debian-apache-200207/msg5.html

but that doesn't really help in this case, unless you guys can think of a different angle on this?

----- Original Message ----- From: Jason Lim [EMAIL PROTECTED] To: debian-isp@lists.debian.org Sent: Tuesday, December 16, 2003 11:23 PM Subject: Intel Hyperthreading problem on server?

[snip]

-- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
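A quick way to gauge how widespread those crashes are is to count the segfault lines in the Apache error log; a sketch over sample lines (on a live woody box you would grep the error log file itself, whose path is an assumption here):

```shell
# Count Apache children that exited on SIGSEGV. The sample below stands
# in for the real error log (e.g. /var/log/apache/error.log -- path
# assumed, check your ErrorLog directive).
log='[Wed Dec 17 02:27:37 2003] [notice] child pid 4242 exit signal Segmentation fault (11)
[Wed Dec 17 02:27:38 2003] [notice] child pid 4243 exit signal Segmentation fault (11)
[Wed Dec 17 02:27:39 2003] [notice] caught SIGTERM, shutting down'
segfaults=$(printf '%s\n' "$log" | grep -c 'exit signal Segmentation fault')
echo "segfaulting children: $segfaults"
```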
Re: Intel Hyperthreading problem on server?
On Tue, 2003-12-16 at 12:39, Jason Lim wrote:

> Just noticed one more thing... it appears to be Apache causing the super-high load (among other programs running) when SMP is compiled into the kernel, along with a bunch of errors in syslog:
>
> [Wed Dec 17 02:27:37 2003] [notice] child pid xx exit signal Segmentation fault (11)
>
> (and a whole bunch of these errors, like 50 lines)
>
> I did a search and someone said it has to do with Apache requesting memory that it doesn't own:
>
> http://lists.debian.org/debian-apache/2002/debian-apache-200207/msg5.html

Mhm... I don't want to be hasty, but it seems I'm looking at exactly this problem with a very memory-hungry PHP application. Except in my case, this error ONLY appears if SMP support is compiled into the kernel; otherwise it runs smoothly even under very high load. Apache doesn't immediately have the problem with SMP compiled in, though... it takes maybe an hour or two before the problem appears.
Re: Intel Hyperthreading problem on server?
Hi,

Interesting info... especially the part:

> Do you have high memory support compiled in? High memory support above 4GB might cause problems. If you do not have more than 2GB of RAM you should make sure that high memory support is not enabled.

The server has 1.5 GB RAM. I compiled it with High Memory Support (4GB) because I don't know how much more RAM may be added in the future. I suppose I could try going back as you suggested, but the kernel help text suggests that the 4GB high-memory option *should* also work for less RAM than that :-/ Most frustrating.

I will try re-compiling with your suggestion a bit later today and let you know how it turns out.

----- Original Message ----- From: Theodore Knab [EMAIL PROTECTED] To: debian-isp@lists.debian.org Sent: Wednesday, December 17, 2003 10:07 AM Subject: Re: Intel Hyperthreading problem on server?

I am using the 2.4.20 kernel with SMP support on a Hyper-Threading Intel. I remember having problems getting it to work with SMP support initially. I think the kernel config has to be perfect. ;-)

Do you have high memory support compiled in? High memory support above 4GB might cause problems. If you do not have more than 2GB of RAM you should make sure that high memory support is not enabled.

Also, did you enable hyper-threading in the BIOS? Auto-detect modes might cause problems.

http://publib-b.boulder.ibm.com/Redbooks.nsf/RedbookAbstracts/tips0175.html?Open

My system:

Linux tedsdesk 2.4.20 #22 SMP Mon Jul 21 14:53:07 EDT 2003 i686 GNU/Linux

[EMAIL PROTECTED]: cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 15
model           : 1
model name      : Intel(R) Pentium(R) 4 CPU 1.50GHz
stepping        : 2
cpu MHz         : 1495.172
cache size      : 256 KB
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 2
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm
bogomips        : 2981.88

The "ht" in the flags line tells me hyper-threading is being recognized.

On 16/12/03 23:23 +0800, Jason Lim wrote: [snip]

--
Ted Knab
Chester, MD 21619
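Ted's check can be scripted; a sketch using the flags line quoted above as sample data (on a live system you would read the flags line from /proc/cpuinfo directly):

```shell
# Detect the "ht" CPU flag. The flags string is copied from the cpuinfo
# output above; the padding spaces in the case pattern force a
# whole-word match so substrings of other flags cannot match.
flags="fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm"
case " $flags " in
  *" ht "*) ht=yes ;;
  *)        ht=no  ;;
esac
echo "hyper-threading flag: $ht"
```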
Re: Intel Hyperthreading problem on server?
> I was also considering the possibility of hardware error, but if it works 100% reliably without HT/SMP yet virtually crashes at high load with Apache, that would pretty much rule out hardware error, unless the CPU's HT is buggy (highly unlikely).

Well, it's not that the kernel does not detect the HT; it does, and quite fine (it shows lots of processors in the box and all). The problem is that Apache is crashing under high load with a segfault. As I understand it, this can be a faulty-hardware problem (bad memory = segfault) or an actual software problem. I'm not sure, but I'm having this problem as well with an HT server and have not been able to rule out faulty hardware. Nonetheless, this could also be, for example, an ugly module in woody's php4; some of them (xslt, for example) are particularly edgy under high load, having still been in beta by the time woody froze.

On Tue, 2003-12-16 at 20:07, Theodore Knab wrote:

> I am using the 2.4.20 kernel with SMP support on a Hyper-Threading Intel. I remember having problems getting it to work with SMP support initially. I think the kernel config has to be perfect. ;-)
>
> Do you have high memory support compiled in? High memory support above 4GB might cause problems. If you do not have more than 2GB of RAM you should make sure that high memory support is not enabled.
>
> Also, did you enable hyper-threading in the BIOS? Auto-detect modes might cause problems.
>
> http://publib-b.boulder.ibm.com/Redbooks.nsf/RedbookAbstracts/tips0175.html?Open
>
> [snip]
Re: Intel Hyperthreading problem on server?
Hi,

I just checked the kernel info for the memory support part:

> If you are compiling a kernel which will never run on a machine with more than 960 megabytes of total physical RAM, answer "off" here (default choice and suitable for most users). This will result in a 3GB/1GB split: 3GB are mapped so that each process sees a 3GB virtual memory space and the remaining part of the 4GB virtual memory space is used by the kernel to permanently map as much physical memory as possible. If the machine has between 1 and 4 Gigabytes physical RAM, then answer "4GB" here.

I guess with 1.5 GB RAM you need to go with the 4GB option... so that won't work :-( and having just 960 MB RAM wouldn't work either...

> Interesting info... especially the part: "Do you have high memory support compiled in? High memory support above 4GB might cause problems. If you do not have more than 2GB of RAM you should make sure that high memory support is not enabled." The server has 1.5 GB RAM. I compiled it with High Memory Support (4GB) because I don't know how much more RAM may be added in the future. I will try re-compiling with your suggestion a bit later today and let you know how it turns out.

----- Original Message ----- From: Theodore Knab [EMAIL PROTECTED] To: debian-isp@lists.debian.org Sent: Wednesday, December 17, 2003 10:07 AM Subject: Re: Intel Hyperthreading problem on server?

[snip]
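One way to confirm which high-memory option a kernel was actually built with is to check its config file; a sketch over a sample fragment (on a live system the file would be /boot/config-&lt;version&gt; or similar; the CONFIG_NOHIGHMEM / CONFIG_HIGHMEM4G / CONFIG_HIGHMEM64G names are the 2.4-series option names):

```shell
# Report the high-memory setting from a kernel config. The here-string
# stands in for the real config file of the running kernel.
config='# CONFIG_NOHIGHMEM is not set
CONFIG_HIGHMEM4G=y
# CONFIG_HIGHMEM64G is not set'
if printf '%s\n' "$config" | grep -q '^CONFIG_HIGHMEM64G=y'; then
  highmem=64G
elif printf '%s\n' "$config" | grep -q '^CONFIG_HIGHMEM4G=y'; then
  highmem=4G
else
  highmem=off
fi
echo "high memory support: $highmem"
```

The sample matches the poster's build: 4GB high-memory support enabled on a 1.5 GB machine.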
Re: 3ware 8506-4 hangs while making filesystem
I would suggest you have a look at the 3dm log file in /var/log, as this sounds like an issue in the communication between the Linux disk-I/O buffering subsystem and the 3ware card. However, since you're just performing the installation, I doubt you can load up 3dmd during that time (can you?).

----- Original Message ----- From: Andrew Miehs [EMAIL PROTECTED] To: [EMAIL PROTECTED] Cc: debian-isp@lists.debian.org Sent: Saturday, 13 December, 2003 6:47 AM Subject: Re: 3ware 8506-4 hangs while making filesystem

I have an 8-port 8506-8 (no RAID) and three 250GB disks running fine on a Debian bf24 box. Each disk is formatted with ext3 and one partition. I also have a RAID 5 setup with 600GB of space running on SuSE 9, again with no problems.

Andrew

> I'm installing a new system (P4C, Intel mainboard, 1GB RAM, 4x160GB SATA, 3ware 8506-4, RAID 5) and the system freezes completely while formatting a big partition ("unable to handle kernel paging request"... means absolutely nothing to me).
>
> kernel BUG at buffer.c:559! invalid operand: CPU: 0
>
> Has anyone experienced this with a 3ware card?
>
> thanks, tinus
>
> Red Hat uses a 2.4.20 kernel (uname gives 2.4.20-8BOOT), and for woody I used bf24. The filesystem does not seem to matter; I tried both ext2 and ext3.
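Besides the 3dm log, the kernel ring buffer usually records what the controller driver saw before the freeze; a sketch over made-up sample lines (the 2.4-era 3ware driver name 3w-xxxx is an assumption here):

```shell
# Pull 3ware driver messages out of kernel output. The here-string is a
# fabricated sample standing in for `dmesg` on the affected box.
dmesg_sample='3w-xxxx: scsi0: Found a 3ware Storage Controller at 0xd800, IRQ: 11
EXT3-fs: mounted filesystem with ordered data mode.
kernel BUG at buffer.c:559!'
hits=$(printf '%s\n' "$dmesg_sample" | grep -c '^3w-')
echo "3ware driver lines: $hits"
```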
Re: 3ware 8506-4 hangs while making filesystem
I would suggest you have a look at the 3dm log file in /var/log, as this sounds like an issue in the communication between the Linux disk I/O buffering subsystem and the 3ware card. However, since you're just performing the installation, I doubt you can load up 3dmd during that time (can you?). - Original Message - From: Andrew Miehs [EMAIL PROTECTED] To: [EMAIL PROTECTED] Cc: debian-isp@lists.debian.org Sent: Saturday, 13 December, 2003 6:47 AM Subject: Re: 3ware 8506-4 hangs while making filesystem I have an 8 port 8506-8 (no raid) and 3 250GB disks running fine on a Debian bf24 box. Have formatted each disk with ext3 and 1 partition. Have a RAID 5 setup running on SUSE 9 with 600GB space, and also no problems. Andrew I'm installing a new system (P4C, Intel mainboard, 1G RAM, 4x160GB SATA, 3ware 8506-4, RAID 5) and the system freezes completely while formatting a big partition ("unable to handle kernel paging request"... means absolutely nothing to me). kernel BUG at buffer.c:559! invalid operand: CPU: 0 Anyone experienced this with a 3ware card? thanks, tinus Red Hat uses a 2.4.20 kernel (uname gives 2.4.20-8BOOT) and for woody I used bf24. The filesystem does not seem to matter; I tried both ext2 and ext3. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: duplicating servers - remote backup to HD
Any good way to get around qmail's use of inode numbers as file names? I've tried doing a simple cp before and it just doesn't work afterwards... it doesn't see the files. I've seen hacks, but they don't seem to work well and take forever to run, which can be tough, especially if you have hundreds, if not thousands, of accounts, each possibly with a hundred emails in each... - Original Message - From: W.D.McKinney [EMAIL PROTECTED] To: debian-isp@lists.debian.org Sent: Sunday, December 07, 2003 02:05 PM Subject: Re: duplicating servers - remote backup to HD On Sat, 2003-12-06 at 14:23, George Georgalis wrote: On Sat, Dec 06, 2003 at 01:33:32PM -0900, W.D.McKinney wrote: Hello, I'd like to back up a couple of Debian Woody servers remotely to a storage array that I was given recently. The servers are at a local colo and I have an xDSL connection provided by the ISP that serves the colo, so that's good. I am thinking that someone might have an rsync script that they are using like this? Is there one available anywhere? Sure, here's what I use for taking an image of a system. If you plan to restore from your backup, don't exclude your hostname, ssh host keys, etc. You do want to exclude /proc and any NFS etc. though.
And don't forget '--numeric-ids', as the specific numbers are referenced in /etc/{passwd,group}:

rsync -av --progress --delete-excluded --numeric-ids \
  --exclude=**/cdrom/* \
  --exclude=**/etc/hostname \
  --exclude=**/etc/mtab \
  --exclude=**/etc/network/interfaces \
  --exclude=**/floppy/* \
  --exclude=**/var/lock/* \
  --exclude=.bash_history \
  --exclude=.viminfo \
  --exclude=/.ssh/id* \
  --exclude=/etc/**/[EMAIL PROTECTED] \
  --exclude=/etc/**/current \
  --exclude=/etc/ssh/ssh_host_dsa_key \
  --exclude=/etc/ssh/ssh_host_dsa_key.pub \
  --exclude=/etc/ssh/ssh_host_rsa_key \
  --exclude=/etc/ssh/ssh_host_rsa_key.pub \
  --exclude=/supervise/status \
  --exclude=/tmp/* \
  --exclude=/var/backups/*gz \
  --exclude=/var/log/**/[EMAIL PROTECTED] \
  --exclude=/var/log/**/current \
  --exclude=/var/log/dmesg \
  --exclude=/var/run/*pid \
  --exclude=/var/tmp/* \
  --exclude=dhclient.leases \
  --exclude=dhcpd.leases \
  --exclude=known_hosts \
  --exclude=locatedb \
  --exclude=ntp.drift \
  --exclude=proc/* \
  --exclude=random-seed \
  --exclude=utmp \
  --exclude=wtmp \
  $src $dest

You'll need -e ssh and root on both sides to read/create all the uids. Caveat emptor: you may still have some problems with daemontools control files being included... Hi George, Hey thanks, I will try this as well. Good to hear from you twice in a week :-) Dee -- Alaska Wireless Systems http://www.akwireless.net -=- Take Control of Your E-Mail! (907)349-4308 Office - AIM = awswired -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
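As a minimal sanity check of the exclude-style rsync invocation above, here is a hedged sketch that exercises the same flags on a throwaway directory tree (all paths below are made up for illustration):

```shell
# Build a tiny fake root to copy from.
src=$(mktemp -d)
dest=$(mktemp -d)
mkdir -p "$src/etc" "$src/tmp"
echo "keep me" > "$src/etc/fstab"
echo "scratch" > "$src/tmp/junk"
# -a preserves modes and times; --numeric-ids keeps raw uids/gids, which
# matters when the two machines map user names to different numbers.
# A leading / in an exclude anchors it at the transfer root ($src here).
rsync -a --numeric-ids --exclude='/tmp/*' "$src/" "$dest/"
ls -R "$dest"
```

Over the network the same command takes -e ssh and user@host:/ paths, as in the message above.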
Re: Strange problem with NIC
We've run Realtek cards on some servers, and they've worked flawlessly for us. We never pushed them to the absolute max, but at one point they were pushing about 50Mbps (far from the theoretical 100Mbps... but you'll never get that anyway). - Original Message - From: [EMAIL PROTECTED] To: Roman Medina [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Wednesday, November 26, 2003 02:26 PM Subject: Re: Strange problem with NIC Is it a Realtek card? If so, go get 3Com/Intel. On Sun, 23 Nov 2003, Roman Medina wrote: Hi, I'm experiencing the following problem: one Debian machine with a 10/100 Ethernet NIC where the upstream speed is reasonable (2 or 3 Mbytes per second) but the downstream speed is awful (35 kbytes per second). All experiments are made in a LAN, so I cannot explain the extremely low 35 kbytes/s speed. Any idea? TIA Saludos, --Roman -- PGP Fingerprint: 09BB EFCD 21ED 4E79 25FB 29E1 E47F 8A7D EAD5 6742 [Key ID: 0xEAD56742. Available at KeyServ] -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Strange problem with NIC
Run mii-tool and see what speed your card is using first. - Original Message - From: Roman Medina [EMAIL PROTECTED] To: debian-isp@lists.debian.org Sent: Sunday, November 23, 2003 05:49 PM Subject: Strange problem with NIC Hi, I'm experiencing the following problem: one Debian machine with a 10/100 Ethernet NIC where the upstream speed is reasonable (2 or 3 Mbytes per second) but the downstream speed is awful (35 kbytes per second). All experiments are made in a LAN, so I cannot explain the extremely low 35 kbytes/s speed. Any idea? TIA Saludos, --Roman -- PGP Fingerprint: 09BB EFCD 21ED 4E79 25FB 29E1 E47F 8A7D EAD5 6742 [Key ID: 0xEAD56742. Available at KeyServ] -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Cat 3 cabling
Hi all, I was wondering... what is physically different between Cat 3 cabling (10BaseT) and Cat 5 cabling (100BaseTX and better)? Does Cat 3 cabling have fewer wires or something? Besides looking for text written on the cable, is there any way to know which is which? Hope someone knows the answer to this, as I've never actually seen Cat 3 ;-) Sincerely, Jas
Re: Cat 3 cabling
Cat 3 cable is the quality of 4-pair wiring used for voice connections between PBXs and analog telephones. It turns out to be 'good enough' for 10 Mb/s Ethernet (10BaseT) but not good enough for 100 Mb/s or Gigabit Ethernet. Cat 5 cable is also 4 pairs, but the manufacturing process is more precise (pitch of the twists, different for each pair; wire gauge; insulation thickness; etc.). As a result, the Cat 5 impedance is more uniform and produces lower signal losses. The better impedance matching carries over into the connectors, which are newer designs (almost all IDC, more precise punch-down blocks) than the Cat 3 ones (screw posts and relatively sloppy 66 punch-downs). Bill So in essence, since they are both 4 pairs, just looking at one won't let you know which it is (without actually testing it)? Any way to turn Cat 5 into Cat 3, and vice versa? Thanks.
Re: Problem with rare cases where browser seems to use HTTP 1.0 instead of 1.1
Given that you stated that two clients on different ISPs have the same problem at the same time, it seems to eliminate the possibility of a proxy. The chance of two Windows machines independently having the same bug at the same time seems rather low, so it seems likely to be the server at fault. I suggest writing a script to use wget (or some similar tool) to repeatedly get the page in question and archive the results. Try to reproduce it on a Linux machine. In addition to Russell's suggestion, try disabling the various mod_* in httpd.conf, as some modules that fiddle with the headers and such have been known to mess things up. So try to disable most, if not all, then enable half, etc., until you find the module that might be causing this. And disable the bit about making exceptions for IE keepalive... just comment it all out, and see how it goes. Just eliminate everything, and slowly re-enable stuff. That might be it. Also keep in mind some dumb ISPs claim to give you a straight connection, but are secretly proxying things without your knowledge. Check the headers and see if it passes through any proxies. Try putting HTTP on a different port like 1122 (random number) and see if it works, as proxies almost always attach to port 80. Hope that helps. Jas
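The wget loop suggested above might look something like this sketch; the URL is a placeholder for the misbehaving page, and the response headers are captured from stderr so there is something to diff when a bad response shows up:

```shell
# Poll the problem page and keep timestamped copies of body and headers.
url="http://www.example.com/page.html"   # placeholder: point at the real page
archive=$(mktemp -d)
for i in 1 2 3; do                       # in real use: while true; do ... sleep 60
    stamp="$(date +%Y%m%d-%H%M%S)-$i"
    # -S prints the server response headers on stderr; we archive all of
    # stderr, and the file is created even if the fetch itself fails.
    wget -S -O "$archive/$stamp.html" "$url" > /dev/null 2> "$archive/$stamp.headers" || true
done
ls "$archive"
```

Once a failure is captured, `diff` a good .headers file against a bad one to see which hop (server, module, or proxy) changed the response.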
Re: SSH access restrictions
To summarize the options I've found so far: a) PAM chroot b) rbash - restricted shell c) SSH2 chroot access. In this case the machine in question is a remote virtual server with only SSH access, so I think c) may be the go. If I had local users I guess a) or b), with a) having stronger security. Did you try c) already? Did it work effectively? -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Moving Sites
Sincerely, - Original Message - From: Maarten Vink / Interstroom [EMAIL PROTECTED] To: Tarragon Allen [EMAIL PROTECTED] Cc: debian-isp@lists.debian.org Sent: Tuesday, 21 October, 2003 3:31 PM Subject: Re: Moving Sites Tarragon Allen wrote: On Tuesday 21 October 2003 13:43, Rod Rodolico wrote: Guess it boils down to this. When I update the address of mail.dailydata.net, it can take up to 72 hours for that change to percolate throughout the net, so I'm assuming some places will still try to send to the old IP and, if I leave that box on, mail will be delivered to it. If I turn the other box off, I'm assuming they will bounce. No, they won't bounce; most mailservers will leave messages in their queues for up to 5 days when your machine is down. If you lower the TTL for mail.dailydata.net it shouldn't take 72 hours either. Put the IP address of the old site on the new mail server when you bring down the old one, then change your DNS entry, wait three days, then drop the old IP address. Alternatively, set up a redirector on the old mail server to forward traffic to the new mail server (using 'redir' or something similar). Or even easier: assuming the machines are in the same subnet, why not add the IP address of the old server to the new one, on eth0:1 or any other alias for your primary NIC? Both traffic to the old and the new IP will end up on the right server, and you can easily back out if there is a problem by removing the alias. Or as a third solution, you could have the old server/IP forward mail to the new server/IP (basically relay mail to the new server/IP), and since your new server is authoritative, it will pick up the mail. No loss. Jas
Re: SSH access restrictions
Hi, Just a quick question on libpam-chroot. This package is not available in 'stable'. I've only ever used 'stable'. It should be OK to grab this package from 'testing' and use it, hey? Usually you can't... as they have dependency problems. What you need is a backport to stable... search on Google for one (http://www.apt-get.org/ is one) and see if anyone has a backport for it. Hopefully they do... I'd be interested in chroot as well. I've heard of something called jailshell, as offered on some control panels like cPanel, but I'm not sure what it actually is. So I know it's possible... I just haven't found a reliable way. Advanced users can probably figure out ways to break out of the jail, but at least it helps a bit. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: musirc4.71.exe - firewall question
Anyway, I blocked it from connecting and I am trying to delete the file. I succeeded and even put it in quarantine - but it keeps recreating itself. How can I get rid of it - or find the source that is recreating it? This is HIGHLY offtopic for this group, but anyway... Sounds like a virus... especially as you yourself said it is mysteriously re-creating itself. Get some anti-virus software... I like eTrust EZ Antivirus, but YMMV. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Apache clustering w/ load balancing and failover
No, I don't think this would work. You'll need a third box which will do the balancing (well, maybe you could get it to work, but it's not intended this way). As I said before, the balancer doesn't have to be a fast machine - almost anything you can find will be sufficient. Strangely enough, you might find FreeBSD (or one of the BSDs) working better as the forwarder than Linux, due to its better ability to handle many concurrent connections. YMMV of course. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Web administration Apache - Virtual domains
- Original Message - From: Matias G. Lambert ( OSInet ) [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Tuesday, 09 September, 2003 11:10 PM Subject: Web administration Apache - Virtual domains Hi, I'm looking for an Apache virtual domain web admin tool. Does anyone know an open source solution for that? And does anyone know a complete solution for an ISP? (vpopmail, apache, proftp, tinydns) Thanks The only free one I know about is Webmin. Other for-pay ones are cPanel, H-Sphere, and quite a few others. Google is your friend. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: On SMP, getting: Message from watchdog: The system will be rebooted because of error -3!
From: Russell Coker [EMAIL PROTECTED] To: Jason Lim [EMAIL PROTECTED]; [EMAIL PROTECTED] Sent: Monday, 08 September, 2003 1:26 PM Subject: Re: On SMP, getting: Message from watchdog: The system will be rebooted because of error -3! On Mon, 8 Sep 2003 15:09, Jason Lim wrote: Recently got SMP working, but now keep getting: Message from watchdog: The system will be rebooted because of error -3! Check /var/log/daemon.log for the real reason; a transient load spike is a likely cause. Right... you were right.

Sep 8 12:31:18 beta watchdog[243]: loadavg 159 63 24 is higher than the given threshold 150 140 130!
Sep 8 12:31:28 beta watchdog[243]: shutting down the system because of error -3

I had set the loadavg to such an absurd number, I never thought it could be that. It NEVER peaks that high on a single CPU (well... without HT SMP on). Is this normal? Do SMP systems tend to spike a lot higher than regular single-CPU ones? Strange thing is... the previous 2GHz CPU never went that high... and now with a 2.8GHz Hyper-Threading processor, the load average actually increases (or at least the spiking load average does). Is this a trait of SMP? Never worked with SMP like this before... is such a strange characteristic normal? Thanks in advance. Jas -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
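The thresholds involved live in /etc/watchdog.conf. A sketch with illustrative values only (not a recommendation); the point is that on an HT/SMP box the one-minute load average can spike far above the single-CPU norm, so the limits need headroom or watchdog reboots the machine on a transient spike:

```
# /etc/watchdog.conf (illustrative values)
# Reboot only if the 1/5/15-minute load averages exceed these:
max-load-1  = 300
max-load-5  = 150
max-load-15 = 100
```

The max-load-* keys are the standard watchdog options; the numbers here are made up for the example.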
Re: SMP on Debian server with Hyperthreading
Sincerely, - Original Message - From: Guus Houtzager [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Saturday, 06 September, 2003 6:07 PM Subject: Re: SMP on Debian server with Hyperthreading Hi, On Sat, 2003-09-06 at 09:31, Jason Lim wrote: Hi Guus, Yes, the BIOS setting is enabled. The ONLY thing that I haven't done is edit lilo to include the acpismp=force setting. Did you set that to make it work? Does it work without it (i.e. SMP enabled WITHOUT modifying lilo)? Haven't modified lilo for this to work. I looked again at your dmesg output and compared it to mine. With me it's ACPI doing the detection of the CPUs. Did you enable ACPI? Ah... did NOT have ACPI enabled. Did not know it had to be, for SMP to work! I enabled it, compiled the kernel, and /proc/cpuinfo sees 2 CPUs, but top doesn't. After spending ages trying to figure it out, I found the version of top that comes with Debian 3.0 does NOT have SMP support. So I downloaded the backported version from: http://people.debian.org/~nobse/debian/woody/backported/procps/ and after top launches, press 1 (or set the appropriate cmd line parameter) and it works great. I did ACPI a little differently than you, though... I have: # CONFIG_ACPI_HT_ONLY is not set set as y/1 (and therefore everything else set as n/0), as I don't need all the acpi_power and other stuff, just want SMP to work. So it's working now. Thanks for your help! Part of my dmesg:

Linux version 2.4.22 ([EMAIL PROTECTED]) (gcc version 3.3.2 20030831 (Debian prerelease)) #1 SMP Tue Sep 2 11:37:18 CEST 2003
BIOS-provided physical RAM map:
BIOS-e820: - 000a (usable)
BIOS-e820: 000f - 0010 (reserved)
BIOS-e820: 0010 - 2fff (usable)
BIOS-e820: 2fff - 2fff3000 (ACPI NVS)
BIOS-e820: 2fff3000 - 3000 (ACPI data)
BIOS-e820: fec0 - 0001 (reserved)
767MB LOWMEM available.
ACPI: have wakeup address 0xc0002000
found SMP MP-table at 000f51c0
hm, page 000f5000 reserved twice.
hm, page 000f6000 reserved twice.
hm, page 000f reserved twice.
hm, page 000f1000 reserved twice.
On node 0 totalpages: 196592
zone(0): 4096 pages.
zone(1): 192496 pages.
zone(2): 0 pages.
ACPI: RSDP (v000 IntelR) @ 0x000f6c50
ACPI: RSDT (v001 IntelR AWRDACPI 0x42302e31 AWRD 0x) @ 0x2fff3000
ACPI: FADT (v001 IntelR AWRDACPI 0x42302e31 AWRD 0x) @ 0x2fff3040
ACPI: MADT (v001 IntelR AWRDACPI 0x42302e31 AWRD 0x) @ 0x2fff6700
ACPI: DSDT (v001 INTELR AWRDACPI 0x1000 MSFT 0x010d) @ 0x
ACPI: Local APIC address 0xfee0
ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled) Processor #0 Pentium 4(tm) XEON(tm) APIC version 16
ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled) Processor #1 Pentium 4(tm) XEON(tm) APIC version 16
ACPI: LAPIC_NMI (acpi_id[0x00] polarity[0x1] trigger[0x1] lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x00] polarity[0x1] trigger[0x1] lint[0x1])
ACPI: IOAPIC (id[0x02] address[0xfec0] global_irq_base[0x0])
IOAPIC[0]: Assigned apic_id 2
IOAPIC[0]: apic_id 2, version 32, address 0xfec0, IRQ 0-23
ACPI: INT_SRC_OVR (bus[0] irq[0x0] global_irq[0x2] polarity[0x0] trigger[0x0])
ACPI: INT_SRC_OVR (bus[0] irq[0x9] global_irq[0x9] polarity[0x1] trigger[0x3])
Using ACPI (MADT) for SMP configuration information
Kernel command line: auto BOOT_IMAGE=Linux ro root=303 hdc=scsi

and so on... ACPI part of my config:

# ACPI Support
CONFIG_ACPI=y
# CONFIG_ACPI_HT_ONLY is not set
CONFIG_ACPI_BOOT=y
CONFIG_ACPI_BUS=y
CONFIG_ACPI_INTERPRETER=y
CONFIG_ACPI_EC=y
CONFIG_ACPI_POWER=y
CONFIG_ACPI_PCI=y
CONFIG_ACPI_SLEEP=y
CONFIG_ACPI_SYSTEM=y
# CONFIG_ACPI_AC is not set
# CONFIG_ACPI_BATTERY is not set
CONFIG_ACPI_BUTTON=y
CONFIG_ACPI_FAN=y
CONFIG_ACPI_PROCESSOR=y
# CONFIG_ACPI_THERMAL is not set
# CONFIG_ACPI_ASUS is not set
# CONFIG_ACPI_TOSHIBA is not set
# CONFIG_ACPI_DEBUG is not set
# CONFIG_ACPI_RELAXED_AML is not set

Thanks. I hope this helps. -- Guus Houtzager Email: [EMAIL PROTECTED] PGP fingerprint = 5E E6 96 35 F0 64 34 14 CC 03 2B 36 71 FB 4B 5D A)bort, R)etry, I)nfluence with large hammer. -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe.
Trouble? Contact [EMAIL PROTECTED]
On SMP, getting: Message from watchdog: The system will be rebooted because of error -3!
Hi all, Recently got SMP working, but now keep getting: Message from watchdog: The system will be rebooted because of error -3! (note this isn't really SMP, it's intel hyperthreading...) The system auto reboots because of this. Not sure why... doesn't appear to be the load or anything (no conditions met from /etc/watchdog.conf) Any idea what this might be? Sincerely, Jas -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Sendmail or Qmail ? ..
- Original Message - From: Cameron L. Spitzer [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Sunday, 07 September, 2003 12:19 AM Subject: Re: Sendmail or Qmail ? .. I've been running Qmail since '98. It's got a bottleneck in disk writes, but aside from that it's fast. (Anybody tried running the queue in a ramdisk? How about in an fs made in a file mounted loopback?) It's secure and reliable. Unfortunately, it's not being maintained by its author. If you want the functionality of a modern MTA, you need to wade through a disorganized and unverifiable swamp of contributed patches and add-ons. I'm sure most of the add-ons are great, if you can figure out where to get them and how to use them. But the ones I've tried (mjinject and a couple of SMTP AUTH's) were broken, and unsupported by *their* authors. I'm not going to ask hundreds of users to rely on a cobbled-together mess like that. Apologies and respects to Dave Sill. Of course, it is precisely because Qmail does not offer all the bells and whistles that it is among the most secure MTAs available. This does not mean Exim and others are not secure, but natural thinking dictates that, given the same security model, one with lots of extra features will be less secure. I use Qmail without any extra patches, and also have SpamAssassin installed and integrated with it, and don't have any problems. I use smtp-after-pop, so I don't have the SMTP AUTH patches installed, but some of the patches are integrated well into Qmail. So I've given up on Qmail. I'm using Exim for small systems, and I'll try Postfix for my next big one. I've heard good things about Postfix, but as Qmail does basically what I need, and since I don't need all the advanced features, I'm staying with something secure and reliable, unless something I do requires something different. Jas -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Sendmail or Qmail ? ..
On Sun, 7 Sep 2003 02:19, Cameron L. Spitzer wrote: I've been running Qmail since '98. It's got a bottleneck in disk writes, but aside from that it's fast. (Anybody tried running the queue in a ramdisk? Running the queue on a ramdisk would kill reliability. Indeed, been there, done that. In fact, something I wrote a long while ago about how to increase Qmail's performance greatly (splitting the queues onto two different hard disks/spindles) made it into Debian Weekly News or something. Search Google or the mailing list archives for more info on that. And if it is going to be primarily an outgoing mail server, putting it on a ramdisk makes it deadly fast, but as Russell said... you would lose those emails if it suddenly crashed. Using a non-volatile RAM device, however, will significantly increase performance without risk. Umem devices seem a good option for this; their recent devices are PCI 2.2 - 64-bit 66MHz and claim to sustain over 500MB/s transfer rates with no seeks. I am not sure about Linux device driver support for that, but the old versions worked well by all accounts. If you put your queue on a Umem device you should get all the performance of a RAM disk with all the reliability of a RAID hard drive device (better reliability than a hard drive, as there are no moving parts). http://www.micromemory.com/newwebsite/Dynamic/index.asp How about in an fs made in a file mounted loopback?) What would be the benefit of an FS in a loopback-mounted file? That should kill performance and reliability at the same time. Mmm... one of the limitations of Qmail is that it creates many, many individual files (one for each email) and, due to filesystem limitations, ext2/3 starts slowing to a crawl. Of course, another way would be to use ReiserFS, but wouldn't doing an FS in a loopback-mounted file resolve at least that? -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
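For what it's worth, carving a filesystem out of a regular file is cheap to try; a hedged sketch follows (size and paths are illustrative, and only the commented-out mount step needs root):

```shell
# Create an 8MB backing file and put an ext2 filesystem inside it.
img=$(mktemp /tmp/queuefs.XXXXXX)
dd if=/dev/zero of="$img" bs=1M count=8 2>/dev/null
mke2fs -F -q "$img"              # -F: target is a plain file, not a device
# mount -o loop "$img" /var/qmail/queue   # root-only; needs the loop driver
ls -l "$img"
```

Whether this helps qmail is another question: the loop layer adds extra copying and caching on every write, which is presumably why Russell expected it to hurt performance rather than help.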
Re: Postfix! [WAS: Re: Sendmail or Qmail ? ..]
Please people, What is the connection between the nationality of Wietse Venema and people who sent spam? This is a very strange argument and more fitted for a discussion between kids. We are adults, we are professionals, this list is to discuss technicall matters (personal opinions allowed). Please keep up the high standard of this list! Thank you Brian Hear hear! Nationality doesn't matter. We're talking about technical merit of things here. Let's keep race, creed, religion, colour out of this. You should follow nanae more often on usenet and you will know that `spammers' mostly moved away from a2000.nl/chello.nl thanks to Marcel his actions. And you don't clean a network with over 300k of customers overnight, but even SPEWS is seeing changes. Don't mention SPEWS. SPEWS is famous for blocking large non-USA ISPs at the drop of a hat, while large USA spam-support ISPs get away with murder. Why? Because Spews is either run by someone in the USA or knows that if they started applying the same principals to everyone, more and more large USA ISPs will be blocked completely, and less and less people will use SPEWS. Thus SPEWS has double-standards in this regard. I prefer ones that have the same standard, regardless of what country you are in. Many many block lists are available... www.spamcop.net... or just check out one of the best Block List comparisons yourself at: http://www.declude.com/JunkMail/Support/ip4r.htm Also another thing, if I may believe statistics from people running spamikaze[1] is the US currently nummero uno in there blacklists counted by blocked IP-address. Even .tw, .cn and .kr are just minor issues compared to the US. Don't tell SPEWS and NANAE that... from the way they talk and act, every spammer must be in China, Korea, Taiwan, and everywhere else EXCEPT the USA. 
Maybe also nice to know: there is a foundation[2] in the Netherlands that fights Dutch companies that send people bulk e-mail to addresses that were not collected with confirmed opt-in. In the above block list comparison webpage, I believe it is listed?
SMP on Debian server with Hyperthreading
Hi all, Just wondering... I've got a 2.4GHz Hyperthreading CPU (100% sure it is the hyperthreading model), and the BIOS sees it. I then compiled the kernel... the usual, except I added the SMP support setting (Symmetric multi-processing support). Nothing else was changed. Compiled it, liloed it... it's running it:

# uname -a
Linux megalith 2.4.22 #8 SMP Thu Aug 28 14:44:13 HKT 2003 i686 unknown

However:

# mpstat -P
Not an SMP machine...

And in top I don't see the multiple CPU usage. This is all strange. For Linux, aren't Hyperthreading CPUs supposed to act like completely separate, independent CPUs (this was supposed to change in 2.6... but for 2.4, it can't tell the difference, right?). Hope you can advise... as hyperthreading is there but not being used, which is a waste and could add performance. Thanks in advance! Jas
Re: SMP on Debian server with Hyperthreading
Just thought I'd add a bit more info... from dmesg:

Linux version 2.4.22 ([EMAIL PROTECTED]) (gcc version 2.95.4 20011002 (Debian prerelease)) #8 SMP Thu Aug 28 14:44:13 HKT 2003
BIOS-provided physical RAM map:
 BIOS-e820: - 0009fc00 (usable)
 BIOS-e820: 0009fc00 - 000a (reserved)
 BIOS-e820: 000e6000 - 0010 (reserved)
 BIOS-e820: 0010 - 3f73 (usable)
 BIOS-e820: 3f73 - 3f74 (ACPI data)
 BIOS-e820: 3f74 - 3f7f (ACPI NVS)
 BIOS-e820: 3f7f - 3f80 (reserved)
 BIOS-e820: fecf - fecf1000 (reserved)
 BIOS-e820: fed2 - feda (reserved)
119MB HIGHMEM available.
896MB LOWMEM available.
found SMP MP-table at 000ff780
hm, page 000ff000 reserved twice.
hm, page 0010 reserved twice.
hm, page 000fc000 reserved twice.
hm, page 000fd000 reserved twice.
On node 0 totalpages: 259888
zone(0): 4096 pages.
zone(1): 225280 pages.
zone(2): 30512 pages.
Intel MultiProcessor Specification v1.4
Virtual Wire compatibility mode.
OEM ID: Product ID: Springdale-G APIC at: 0xFEE0
Processor #0 Pentium 4(tm) XEON(tm) APIC version 20
I/O APIC #2 Version 32 at 0xFEC0.
Enabling APIC mode: Flat. Using 1 I/O APICs
Processors: 1
Kernel command line: auto BOOT_IMAGE=Linux ro root=801
Initializing CPU#0

Seems it KNOWS it is SMP... but then it detects only 1 processor? Is this how hyperthreading works?

- Original Message - From: Jason Lim [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Saturday, 06 September, 2003 1:06 AM Subject: SMP on Debian server with Hyperthreading

Hi all, Just wondering... I've got a 2.4GHz Hyperthreading CPU (100% sure it is the hyperthreading model), and the BIOS sees it. I then compiled the kernel... the usual, except I added the SMP support setting (Symmetric multi-processing support). Nothing else was changed. Compiled it, liloed it... it's running it:

# uname -a
Linux megalith 2.4.22 #8 SMP Thu Aug 28 14:44:13 HKT 2003 i686 unknown

However:

# mpstat -P
Not an SMP machine...

And in top I don't see the multiple CPU usage. This is all strange. For Linux, aren't Hyperthreading CPUs supposed to act like completely separate, independent CPUs (this was supposed to change in 2.6... but for 2.4, it can't tell the difference, right?). Hope you can advise... as hyperthreading is there but not being used, which is a waste and could add performance. Thanks in advance! Jas
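A quick way to see what the running kernel actually brought up is to look at /proc/cpuinfo. A hedged sketch (on 2.4, getting the HT sibling usually also needs an SMP kernel plus a BIOS/MP-table that enumerates the logical processor, so a count of 1 here matches the "Processors: 1" dmesg line above):

```shell
# Count the logical CPUs the kernel brought up, and check whether the
# CPU itself advertises hyperthreading via the "ht" feature flag.
cpus=$(grep -c '^processor' /proc/cpuinfo)
echo "logical CPUs seen by kernel: $cpus"

if grep -qw ht /proc/cpuinfo; then
    echo "CPU advertises the ht feature flag"
else
    echo "no ht flag reported"
fi
```

If this shows one logical CPU on an HT-capable chip under an SMP kernel, the usual suspects in the 2.4 era were a BIOS that hides the sibling, or a kernel built without the ACPI/MPS support needed to enumerate it.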
Re: Debian Co-location in USA
- Original Message - From: Craig Sanders [EMAIL PROTECTED] To: Jeremy Lunn [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: Wednesday, July 16, 2003 9:36 AM Subject: Re: Debian Co-location in USA

On Tue, Jul 15, 2003 at 01:27:04PM +1000, Jeremy Lunn wrote: The level of support probably doesn't matter, as long as the basics are provided. They are currently looking at Candid Hosting (http://www.candidhosting.com/CGI/Dedicated.asp), however they only provide Red Hat and FreeBSD. So any plan would have to be equal to or better than the Candid $130/month deal.

you might want to be careful about candidhosting.com. this thread has been triggering my SpamAssassin rules... investigation shows that i have candidhosting.com listed in both my postfix access maps and my local SA rules (which means that i have been spammed from them in the past). searching on openrbl.org, google groups, and senderbase indicates that candidhosting are still an active spam source, with their very own SPEWS entry: http://spews.org/html/S339.html as with any RBL listing, take it with a grain of salt and do your own research.

Especially SPEWS listings... notorious for being out of date and such. It seems so many entries in SPEWS before were non-existent, but I suppose that is what comes with running a manual, text-based list.
Re: cgi-bin directory under home/user/public_html
- Original Message - From: Keith G. Murphy [EMAIL PROTECTED] To: DEBIAN debian-isp [EMAIL PROTECTED] Sent: 15 July, 2003 12:41 AM Subject: Re: cgi-bin directory under home/user/public_html

Jason Lim wrote: - Original Message - From: Nestor R. Mazza [EMAIL PROTECTED] To: DEBIAN debian-isp [EMAIL PROTECTED] Sent: 13 July, 2003 11:06 PM Subject: cgi-bin directory under home/user/public_html

Hi, My server is Debian Woody 3.0r1, Apache/1.3.26 (Unix) Debian GNU/Linux, PHP/4.1.2, mod_perl/1.26. All the scripts work fine under the original directories, but now I want to put the users' scripts under /home/user/public_html/cgi-bin. At first I put /home/bodegonweb/public_html/cgi-bin/test-cgi, the same script that works fine under /usr/lib/cgi-bin. I have read the Apache documentation, but until today I couldn't get it to work.

The Apache documentation says: There are many ways to give each user directory a cgi-bin directory such that anything requested as http://example.com/~user/cgi-bin/program will be executed as a CGI script. Two alternatives are:

1. Place the cgi-bin directory next to the public_html directory:

ScriptAliasMatch ^/~([^/]*)/cgi-bin/(.*) /home/$1/cgi-bin/$2

2. Place the cgi-bin directory underneath the public_html directory:

<Directory /home/*/public_html/cgi-bin>
Options ExecCGI
SetHandler cgi-script
</Directory>

If you are using suexec, the first technique will not work because CGI scripts must be stored under the public_html directory.

I have used the second option under the virtualhost of the user:

# Dominio bodegonweb.com.ar
<VirtualHost 200.68.76.51>
ServerAdmin [EMAIL PROTECTED]
ServerName www.bodegonweb.com.ar
DocumentRoot /home/bodegonweb/public_html
<Directory /home/bodegonweb/public_html/cgi-bin>
Options ExecCGI
SetHandler cgi-script
</Directory>
#ErrorLog logs/host.some_domain.com-error.log
#CustomLog logs/host.some_domain.com-access.log common
</VirtualHost>

==

Unfortunately, if you want to use suexec, you'll need to recompile it to allow /home/*/public_html/cgi-bin/, otherwise it won't run.

Really? I have found that Woody's apache-perl package does this fine out of the box. Unless I'm really missing something. Nestor, give us some error log output to go on, please.

I agree... some error log output would help. But regarding apache-perl or mod_perl, it actually runs under the Apache user/group ID, unless there is something really new I don't know about. Are you absolutely sure that the scripts you run are run as the user's own user and group ID, and not the Apache one? Jas
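Beyond the compiled-in docroot issue, suexec also refuses scripts over ownership and permission problems, and the suexec log usually says why. A hedged sketch of checking one of those rules by hand (the path and script here are hypothetical demo stand-ins, not Nestor's real vhost):

```shell
# suexec generally refuses CGIs that are writable by group or other
# (besides requiring correct ownership and a location under its
# compiled-in document root). Demo of checking the write-bit rule:
SCRIPT="demo/public_html/cgi-bin/test.cgi"   # hypothetical path
mkdir -p "$(dirname "$SCRIPT")"
printf '#!/bin/sh\necho "Content-type: text/plain"\necho\necho ok\n' > "$SCRIPT"
chmod 755 "$SCRIPT"

# Flag the script if any group/other write bit is set:
if find "$SCRIPT" -perm /022 | grep -q .; then
    echo "WARN: $SCRIPT is group/world-writable"
else
    echo "permissions OK for $SCRIPT"
fi
```

With 755 the check passes; a 775 or 777 script is a typical cause of suexec silently refusing to run a user's CGI.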
Re: Default Apache 404 for all sites
- Original Message - From: Gene Grimm [EMAIL PROTECTED] To: Leonardo Boselli [EMAIL PROTECTED] Cc: [EMAIL PROTECTED] Sent: 14 July, 2003 10:36 AM Subject: Re: Default Apache 404 for all sites

Leonardo Boselli wrote: I was told to set up a script in php or perl that looks whether the directory of the called page has a 404.php or 404.html file; if so, include that, otherwise go up a level and try again; if one reaches the home of the domain and does not find any 404, then use the default one... if you coded the script, please send it to me.

What I've done is this, and it seems to work for me:

Alias /errors/ /usr/share/apache/errors/
ErrorDocument 400 /errors/error.php
ErrorDocument 401 /errors/error.php
ErrorDocument 403 /errors/error.php
ErrorDocument 404 /errors/error.php
ErrorDocument 408 /errors/error.php
ErrorDocument 500 /errors/error.php

It's a PHP script that reads the error code and displays a generically formatted error message for all sites on my server.

Indeed, this was the solution I was looking for!!! I tested it, and I confirm it works by default on all sites (when you put it above all VirtualHosts), and the user can override it by making their own .htaccess file. Much appreciated! Jas
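The error.php itself isn't shown in the thread. Apache passes the original status code to an ErrorDocument script in the REDIRECT_STATUS environment variable, so a minimal handler can be sketched; here as a shell CGI purely for illustration, since we don't have the poster's actual PHP:

```shell
# Minimal catch-all error handler in the spirit of the error.php above.
# Apache sets REDIRECT_STATUS to the original status code when it
# invokes an ErrorDocument script.
cat > error.cgi <<'EOF'
#!/bin/sh
code="${REDIRECT_STATUS:-unknown}"
echo "Content-type: text/html"
echo ""
echo "<html><body><h1>Error $code</h1>"
echo "<p>The server could not complete your request.</p></body></html>"
EOF
chmod +x error.cgi

# Simulate Apache invoking it for a 404:
REDIRECT_STATUS=404 ./error.cgi
```

One script then serves every configured error code, which is exactly why the Alias/ErrorDocument approach scales across all the sites on the box.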
Re: cgi-bin directory under home/user/public_html
- Original Message - From: Peter An. Zyumbilev [EMAIL PROTECTED] To: Jason Lim [EMAIL PROTECTED] Sent: 14 July, 2003 2:56 AM Subject: RE: cgi-bin directory under home/user/public_html

Have you tried adding this?

AddHandler cgi-script .pl

BIVOL

Yes, that would work if you don't mind having the scripts running under the Apache user/group. If you want to run them under the user through suexec, you need to recompile suexec. There is no quick way around it, unfortunately.

-Original Message- From: Jason Lim [mailto:[EMAIL PROTECTED] Sent: Sunday, July 13, 2003 9:19 PM To: Nestor R. Mazza; DEBIAN debian-isp Subject: Re: cgi-bin directory under home/user/public_html

- Original Message - From: Nestor R. Mazza [EMAIL PROTECTED] To: DEBIAN debian-isp debian-isp@lists.debian.org Sent: 13 July, 2003 11:06 PM Subject: cgi-bin directory under home/user/public_html

Hi, My server is Debian Woody 3.0r1, Apache/1.3.26 (Unix) Debian GNU/Linux, PHP/4.1.2, mod_perl/1.26. All the scripts work fine under the original directories, but now I want to put the users' scripts under /home/user/public_html/cgi-bin. At first I put /home/bodegonweb/public_html/cgi-bin/test-cgi, the same script that works fine under /usr/lib/cgi-bin. I have read the Apache documentation, but until today I couldn't get it to work.

The Apache documentation says: There are many ways to give each user directory a cgi-bin directory such that anything requested as http://example.com/~user/cgi-bin/program will be executed as a CGI script. Two alternatives are:

1. Place the cgi-bin directory next to the public_html directory:

ScriptAliasMatch ^/~([^/]*)/cgi-bin/(.*) /home/$1/cgi-bin/$2

2. Place the cgi-bin directory underneath the public_html directory:

<Directory /home/*/public_html/cgi-bin>
Options ExecCGI
SetHandler cgi-script
</Directory>

If you are using suexec, the first technique will not work because CGI scripts must be stored under the public_html directory.

I have used the second option under the virtualhost of the user:

# Dominio bodegonweb.com.ar
<VirtualHost 200.68.76.51>
ServerAdmin [EMAIL PROTECTED]
ServerName www.bodegonweb.com.ar
DocumentRoot /home/bodegonweb/public_html
<Directory /home/bodegonweb/public_html/cgi-bin>
Options ExecCGI
SetHandler cgi-script
</Directory>
#ErrorLog logs/host.some_domain.com-error.log
#CustomLog logs/host.some_domain.com-access.log common
</VirtualHost>

==

Unfortunately, if you want to use suexec, you'll need to recompile it to allow /home/*/public_html/cgi-bin/, otherwise it won't run. There are docs on this common issue. Google is your friend.
Re: cgi-bin directory under home/user/public_html
- Original Message - From: Nestor R. Mazza [EMAIL PROTECTED] To: DEBIAN debian-isp [EMAIL PROTECTED] Sent: 13 July, 2003 11:06 PM Subject: cgi-bin directory under home/user/public_html

Hi, My server is Debian Woody 3.0r1, Apache/1.3.26 (Unix) Debian GNU/Linux, PHP/4.1.2, mod_perl/1.26. All the scripts work fine under the original directories, but now I want to put the users' scripts under /home/user/public_html/cgi-bin. At first I put /home/bodegonweb/public_html/cgi-bin/test-cgi, the same script that works fine under /usr/lib/cgi-bin. I have read the Apache documentation, but until today I couldn't get it to work.

The Apache documentation says: There are many ways to give each user directory a cgi-bin directory such that anything requested as http://example.com/~user/cgi-bin/program will be executed as a CGI script. Two alternatives are:

1. Place the cgi-bin directory next to the public_html directory:

ScriptAliasMatch ^/~([^/]*)/cgi-bin/(.*) /home/$1/cgi-bin/$2

2. Place the cgi-bin directory underneath the public_html directory:

<Directory /home/*/public_html/cgi-bin>
Options ExecCGI
SetHandler cgi-script
</Directory>

If you are using suexec, the first technique will not work because CGI scripts must be stored under the public_html directory.

I have used the second option under the virtualhost of the user:

# Dominio bodegonweb.com.ar
<VirtualHost 200.68.76.51>
ServerAdmin [EMAIL PROTECTED]
ServerName www.bodegonweb.com.ar
DocumentRoot /home/bodegonweb/public_html
<Directory /home/bodegonweb/public_html/cgi-bin>
Options ExecCGI
SetHandler cgi-script
</Directory>
#ErrorLog logs/host.some_domain.com-error.log
#CustomLog logs/host.some_domain.com-access.log common
</VirtualHost>

==

Unfortunately, if you want to use suexec, you'll need to recompile it to allow /home/*/public_html/cgi-bin/, otherwise it won't run. There are docs on this common issue. Google is your friend.
Default Apache 404 for all sites
Hi All, While not specifically Debian, I'm sure you guys figured this one out ages ago. In Apache, I know you can set "ErrorDocument 404 /404.html" or similar in a per-site context, but do you know if a standard one can be used to replace the Apache default? That way, one wouldn't need to dump a whole bunch of 404.html files into each public_html, and it'd work instantly across all sites. AND if a user set up their own .htaccess and overrode the default 404, that would still work (i.e. use their own 404 page). I was hoping there would be a way to load a 404.html from the filesystem, so something like "ErrorDocument 404 /home/default/404.html" could be done for all sites as a default... or at least some sort of workaround. Do you guys have a better method or idea, rather than copying the 404.html webpage into all the sites? Thanks in advance. Jas
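For reference, the shape of the server-wide answer (this mirrors the Alias/ErrorDocument approach given in the replies in this digest; the paths are illustrative, not a tested config):

# In the global httpd.conf context, above all <VirtualHost> blocks.
# Apache won't take a bare filesystem path as the ErrorDocument target,
# so /errors/ is aliased to a real directory first.
Alias /errors/ /usr/share/apache/errors/
ErrorDocument 404 /errors/404.html

Per-site .htaccess ErrorDocument directives still override this, which is exactly the override behaviour asked about.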
Re: Rootkit?
Okay, I hate to say it... but this is EXACTLY what I found. Hope the following helps a bit, from my recent experience. Search the archives for my discussion on debian-isp regarding this. I remember VERY DISTINCTLY the gzip problem... because I thought, WTF... invalid option -d. The files had different names... but they were basically the same thing (i.e. replacing the same files). I also remember the gzip problem because when I tried to run dpkg, apt-get and stuff like that, it couldn't extract the compressed files either... which led me to running gzip -d, and hence the reason I remember this distinctly.

I DOUBT this is a virus. The reason is that, after close inspection, it couldn't be self-replicating, as the box appeared to be getting more rooted the more I looked around, indicating either someone was still logged in, or something strange. Plus, if you run "strace somecommand"... with somecommand being one of the rooted files, I remember the strace output being really short (abnormally short) compared with the real, regular, untainted output.

My solution? I did "mv /bin /bin.hacked ; mv /sbin /sbin.hacked" and so forth (for later inspection and discovery), then copied backup files from another server to that one. How? I put a clean 80GB hard disk into one of the un-rooted servers, mounted it, ran "cp -a /bin /mnt/freshdrive/" (and so forth), then plugged it into the rooted server and copied all the files over. Ensure you boot from a boot floppy or something, to ensure the kernel is untainted and such.

Also, make sure you upgrade your kernel to 2.4.21. I SUSPECT one of the reasons we were rooted is that we were waiting for Debian to come out with either a patched kernel source or a new one, and in the meantime it was rooted. Debian was STRANGELY slow to release this... usually Debian is pretty fast at releasing security updates, but anyway. For your reference, that is the ptrace bug (lots of coverage on this... affects 2.4.18). Check your config files too.

I did NOT find any /etc files or similar to be tainted, but you may want to make sure. Also, Russell Coker recommended SE Linux, and some others recommended the other anti-hack kernel mods. I am investigating these and 99% will start using one; I just need to find one that offers additional protection WITHOUT needing a whole bunch of new config files to make and set, because we roll each kernel out to a bunch of servers, each server is a bit different, and it's a headache to have to customize the policy settings and such for each. Reaching a compromise between security and ease of use is the goal (haha, want max security? Enable nothing but ssh... but then again, even ssh was rooted... maybe NetBSD or OpenBSD would offer the best protection of any OS).

And btw... the way our Debian server got hacked, and now another Debian server... is there a rootkit that is SPECIALIZED in hacking Debian servers now? I know there are lots for Redhat (7.3, 8, 9) but not for Debian... maybe this is a new hole/rootkit targeted at us all?

(btw, sorry for top posting... just wanted to help this guy out quickly, as I remember the frustration I had when it happened to me)

- Original Message - From: Domainbox, Tim Abenath [EMAIL PROTECTED] To: [EMAIL PROTECTED] Sent: Friday, July 11, 2003 7:00 PM Subject: Rootkit?

Hello, In our serverfarm I found various machines not working properly.
They show up complaining:

webbox:/chkrootkit# gzip -d
gzip: invalid option -- d
Segmentation fault

The running binaries take a look at /proc/uptime, which they are not supposed to do:

webbox:/chkrootkit# strace -eopen ls
open("/etc/ld.so.preload", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/etc/ld.so.cache", O_RDONLY) = 3
open("/lib/librt.so.1", O_RDONLY) = 3
open("/lib/libc.so.6", O_RDONLY) = 3
open("/lib/libpthread.so.0", O_RDONLY) = 3
open("/proc/uptime", O_RDONLY) = 3
open("/proc/4215/exe", O_RDONLY) = 3
--- SIGCHLD (Child exited) ---
open("/dev/null", O_RDONLY|O_NONBLOCK|O_DIRECTORY) = -1 ENOTDIR (Not a directory)
open(".", O_RDONLY|O_NONBLOCK|O_LARGEFILE|O_DIRECTORY) = 5
open("/etc/mtab", O_RDONLY) = 5
open("/proc/meminfo", O_RDONLY) = 5
ACKNOWLEDGMENTS README check_wtmpx chkdirs.c chkpro chkrootkit chkwtmp.c strings
COPYRIGHT README.chklastlog check_wtmpx.c chklastlog chkproc chkrootkit.lsm ifpromisc strings.c
Makefile README.chkwtmp chkdirs chklastlog.c chkproc.c chkwtmp ifpromisc.c
webbox:/chkrootkit#

Is this a rootkit installed? Has anyone experienced stuff like this? The machines are running Debian 3.0 with different kernels, 2.4.18-bf2.4 or a static 2.4.20.

[EMAIL PROTECTED] the countless lonely voices, like whispers in the dark...
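The recovery approach Jason describes (replacing binaries from a known-clean machine) boils down to integrity checking against trusted checksums. On Debian, debsums can compare installed files against the package database, but it must be run from clean media, since a rooted box can lie about itself. A hedged offline sketch of the same idea with plain md5sum manifests (all files and paths here are demo stand-ins, not real system binaries):

```shell
# Build a checksum manifest on a clean host, then verify a suspect copy.
mkdir -p clean suspect
printf 'trusted binary\n' > clean/ls          # stand-in for a real /bin/ls
cp clean/ls suspect/ls
( cd clean && md5sum ls ) > manifest.md5      # taken while still clean

printf 'trojaned binary\n' > suspect/ls       # simulate a rootkit swap
if ( cd suspect && md5sum -c ../manifest.md5 ); then
    echo "all binaries match the clean manifest"
else
    echo "MISMATCH: at least one binary was modified"
fi
```

The same comparison against a trusted manifest (or against pristine .deb contents) is how you confirm which files in /bin.hacked and /sbin.hacked were actually replaced.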