Re: network monitoring
On Sat, 30 Oct 2004 at 14:25, martin f krafft wrote:
> I would like to monitor all the nodes of a cluster, but I am rather
> pressed for time, so I cannot investigate all the options. [...] So my
> question is: which network monitoring system would you recommend,
> given my requirements?

How big is your cluster and what do you want to monitor? Have you already looked at Nagios (http://www.nagios.org)? I'm currently setting up another Nagios server for a customer who will monitor all of his systems, be it switches, routers, ordinary servers or a bunch of cluster nodes.

You'll have to write a few configuration files for all the services and each client you want to monitor, but if all nodes in the cluster are similar, it won't be too much work...

best regards,
Markus

--
Markus Oswald  [EMAIL PROTECTED]  \ Unix and Network Administration
Graz, AUSTRIA                      \ High Availability / Cluster
Mobile: +43 676 6485415            \ System Consulting
Fax:    +43 316 428896 15          \ Web Development

--
To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of "unsubscribe". Trouble? Contact [EMAIL PROTECTED]
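To give an idea of the per-node work involved: with similar cluster nodes, only the short host stanzas differ, while one service definition can cover all nodes. A minimal sketch in Nagios 1.x-style object syntax - host names, addresses and the template names are examples, not from the original setup:

```
# hosts.cfg - one host definition per cluster node
define host{
        use                     generic-host    ; template assumed to exist
        host_name               node01
        address                 192.168.0.101
        }

# services.cfg - a single service definition can list all nodes
define service{
        use                     generic-service ; template assumed to exist
        host_name               node01,node02,node03
        service_description     SSH
        check_command           check_ssh
        }
```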
Re: network monitoring
On Sat, 30 Oct 2004 at 15:00, martin f krafft wrote:
> Argh. Even with nagios-text, it wants to pull in Samba and MySQL
> stuff. I don't want either of these installed.

Just grab the source and compile it yourself - it doesn't have many dependencies (works like a charm on woody) and ships with a quite good sample configuration. If you need help, you can also contact me off-list, as I compiled it just a few hours ago... :o)

best regards,
Markus
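The source build mentioned above is short; a sketch of the usual steps - the version number, prefix and user/group names are assumptions, adjust them to the tarball you actually download:

```shell
# build Nagios from source instead of the Debian package
tar xzf nagios-1.2.tar.gz        # hypothetical version
cd nagios-1.2
./configure --prefix=/usr/local/nagios \
            --with-nagios-user=nagios --with-nagios-group=nagios
make all
make install          # install the binaries
make install-config   # install the sample configuration mentioned above
```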
Re: How to get hpasm module on HP Proliant?
On Fri, 20 Aug 2004 at 20:02, Lucas Albers wrote:
> We need a page for debian+hp solutions. I'm sure the information is
> out there, as many debian machines run on hp hardware, but damn if I
> can track it down to one logical location...

I'm currently working on a website for exactly that: www.debian-on-proliant.com

Currently there is only a basic framework and a hardware matrix online, but I hope to add some real content soon - I just haven't had much free time during the last few weeks. Comments, suggestions and especially contributions are welcome!

best regards,
Markus
Re: FW: Woody and HP DL320G2
On Tue, 3 Aug 2004 at 7:56, IT-at-Challenge wrote:
> I am preparing to buy a new HP server, a HP DL320G2, and would like to
> install Woody onto it. The questions I have relate to:
> - the on-board NICs, given on the HP site as "Two NC7760 PCI Gigabit
>   Server Adapters (embedded)"
> - the ATA RAID controller, given as "Integrated Dual Channel Ultra
>   ATA/100 Adapter with Integrated ATA RAID 0, 1"
> - video, given as "Integrated ATI RAGE XL Video Controller with 8 MB
>   SDRAM Video Memory"
> Will woody with the standard bf2.4 kernel detect the NICs and RAID
> controller?

No. The onboard NICs will probably not work with bf24, as they are AFAIK based on the bcm57xx chipset, which is supported starting from 2.4.19 - Woody's bf24 is 2.4.18. But this is not a real problem...

The ATA RAID may or may not work - I have no idea which chipset they are currently using. Can anyone shed some light on this? As I'm currently building a website about running Debian on ProLiant, this information would be really appreciated...

> Will I need to compile my own kernel to do this?

You can, but you won't have to - at least for the NIC part. Just download the drivers from Broadcom, compile them against 2.4.18-bf24 and load them during setup ("preload modules from floppy"). Or go to my website and grab the modules I've prepared for Woody:
http://people.iirc.at/moswald/linux/bf24_modules/bcm5700/

> Or should I try to use Sarge?

Sarge will probably work out of the box. At least the last time I tried, I could install a DL140 without any problems...

best regards,
Markus
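The compile-and-preload step described above, as a command sketch. The tarball name, version and kernel-source path are assumptions - use whatever Broadcom's download and your kernel-source package actually give you:

```shell
# build the bcm5700 driver against the 2.4.18-bf24 kernel headers
tar xzf bcm5700-7.1.22.tar.gz        # hypothetical version number
cd bcm5700-7.1.22/src
make LINUX=/usr/src/kernel-source-2.4.18   # point at the bf24 source tree

# put the module where the installer's "preload modules from floppy"
# step expects it: the /boot directory of a floppy
mount /dev/fd0 /mnt
mkdir -p /mnt/boot
cp bcm5700.o /mnt/boot/
umount /mnt
```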
Re: hardware/optimizations for a download-webserver
On Fri, 16 Jul 2004 at 20:53, Henrik Heil wrote:
> Hello, please excuse my general questions. A customer asked me to set
> up a dedicated webserver that will offer ~30 files (each ~5 MB) for
> download and is expected to receive a lot of traffic. Most of the
> users will have cable modems and their download speed should not drop
> below 50 KB/sec. My questions are: What would be adequate hardware to
> handle e.g. 50 (average) / 150 (peak) concurrent downloads? What is
> the typical bottleneck in this setup? What optimizations should I
> apply to a standard woody or sarge installation? (anything
> kernelwise?)

Maybe I'm too optimistic, but I really don't think you will max out any halfway decent server with this load... 30 x 5 MB gives you 150 MB of content. This can easily be cached in RAM, even without anything like a ramdisk, as Linux does this by itself. Disk I/O should not be a problem. Furthermore, the content seems to be static - no need for a fast CPU.

150 concurrent downloads will be no problem for Apache, even with the default settings. Only if you want to spawn more than 512 (?) child processes will you have to recompile and increase HARD_SERVER_LIMIT.

Summary: Don't bother with tuning the server, and don't even think about setting up a cluster for something like this - definitely overkill. ;o) I have a Debian box here which currently serves more than 160 req/second of dynamic content - no problem at all. The HTTP cluster next to it is intended to handle WAY bigger loads...

best regards,
Markus
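The typical bottleneck in a setup like this is the uplink, not the server. A quick back-of-envelope check, using the numbers from the question above:

```python
# Peak bandwidth needed: 150 concurrent downloads, each guaranteed 50 KB/s.
concurrent_peak = 150
per_client_kbit = 50 * 8            # 50 KB/s expressed in kilobits/s

total_mbit = concurrent_peak * per_client_kbit / 1000   # megabits/s
print(total_mbit)                   # 60.0
```

So the peak load is about 60 Mbit/s - well within a 100 Mbit uplink, but far beyond most hosting contracts of the time; check the network before the hardware.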
Re: debian on HP proliant
On Thu, 22 Apr 2004 at 19:54, Lucas Albers wrote:
> > Hu? I installed Woody (bf24) on a couple of DL380G3 without a hitch
> > - the cciss driver works just fine and you can of course boot from
> > it. The only special thing I do is to load the module for the
> > installed NIC (Broadcom bcm57xx - tg3.o) so I can download a new
> > kernel as soon as the base system is installed...
> We are planning to get some proliant DL380G2 systems, with the HP
> Smart Array 6402 controller.

Do you really mean DL380G2 and not DL380G3? The G2 has been out of production for quite some time now... The DL380G3 has a SmartArray 5i onboard, so you won't need an extra RAID controller unless you need more channels.

> You installed onto this system using sarge? Or driver disks with bf24?
> I'm very interested in your setup steps.

I just installed another DL380G3 yesterday with Woody - even using iLO, as no monitor was nearby and I was too lazy to get one... Here's my procedure - definitely easier than Nathan's approach ;o)

Prepare a floppy with the module for the GbE interfaces: get the source from Broadcom and compile it against 2.4.18-bf24, or use the module from my website (http://people.iirc.at/moswald/linux/bf24_modules/bcm5700/). Copy bcm5700.o onto the floppy into the /boot directory.

- Put in a standard Woody CD, boot from it and start with bf24
- Continue as on any other system
- Before you can set up your network, choose "preload modules from floppy", insert the disk with the module and load it
- Configure the network and continue as usual
- Reboot
- Before finishing the installation, change to another console, load the module from floppy again [1] and set up your network
- Switch back to the first console and continue with the installation; download security fixes and maybe a new kernel [2]

I think it's quite straightforward, as you just need to preload a single module from floppy - the rest is just another Woody setup...

And if you want sarge, well, install woody and update to sarge - definitely a lot less work ;o)

[1] Copy it to your disk and adapt /etc/modules if you want to continue using 2.4.18-bf24. I usually install a current kernel before I reboot again, so I don't care...
[2] I've packaged DL380G3 kernels and the corresponding .config on my website - they're used on quite a few servers.

best regards,
Markus
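Footnote [1] as a command sketch, for staying on 2.4.18-bf24 after the reboot. The module install path is an assumption - any directory under the kernel's module tree will do:

```shell
# make the NIC module permanent on the installed system
mount /dev/fd0 /mnt
mkdir -p /lib/modules/2.4.18-bf24/kernel/drivers/net
cp /mnt/boot/bcm5700.o /lib/modules/2.4.18-bf24/kernel/drivers/net/
umount /mnt
depmod -a
# load the driver on every boot
echo bcm5700 >> /etc/modules
```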
Re: RaiserFS via NFS
On Mon, 19 Apr 2004 at 04:40, George Georgalis wrote:
> Hi, you might like DRBD better than AFS. I think AFS is more suited to
> allowing multiple servers to serve /usr/bin, i.e. static partitions;
> /var or /home partitions need something different. Coda does sound
> good. ...just following these, not using them yet; I think InterMezzo
> is still too young. Links: http://www.drbd.org/
> "Drbd is a block device which is designed to build high availability
> clusters. This is done by mirroring a whole block device via (a
> dedicated) network. You could see it as a network raid-1."

As you already wrote - DRBD is a block device, not a filesystem. You have to run a filesystem (like reiserfs or ext3) on top of it, just as you would with a normal block device like a SCSI RAID. Comparing DRBD to NFS or AFS is, well, apples and oranges...

best regards,
Markus
Re: RaiserFS via NFS
On Mon, 19 Apr 2004 at 19:28, George Georgalis wrote:
> > As you already wrote - DRBD is a block device, not a filesystem. You
> > have to run a filesystem (like reiserfs or ext3) on top of it, just
> > as you would with a normal block device like a SCSI RAID. Comparing
> > DRBD to NFS or AFS is, well, apples and oranges...
> Of course you have to install a fs on a block device. The question was
> about network filesystem operability. DRBD to NFS seems like a fair
> comparison to me, since they are different.

As I understand Andreas, he wants to replace NFS with something different because he has problems with access rights - DRBD is no solution for this problem (as it is not a filesystem).

You can use DRBD to get a redundant (active/passive) block device in a two-node cluster setup, so you can access your data on the backup node as soon as the primary goes down. You currently CANNOT mount a DRBD block device simultaneously from more than one node. Active/active is going to be possible in the near (?) future, but you'll still need a cluster filesystem (OpenGFS?) which won't crash as soon as it is mounted more than once...

But to access data from various machines you'll have to use NFS, Samba or something like that - on top of $filesystem (reiserFS, ext3, XFS), which itself resides on top of $blockdevice (IDE, SCSI, or with an extra layer in between such as DRBD or LVM).

> How's your experience with coda, lustre or afs?

Haven't used any of them yet, as I didn't need any of their features...

best regards,
Markus
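The $blockdevice / $filesystem / network-filesystem layering described above, as commands on the primary node. This is only a sketch: the device name /dev/drbd0 and resource name r0 follow newer DRBD conventions (older releases used /dev/nbX), and the export path and network are made up for the example:

```shell
# layer 1: DRBD block device - make this node the active side
drbdadm primary r0

# layer 2: an ordinary filesystem on top of the block device
mkfs.ext3 /dev/drbd0
mount /dev/drbd0 /export

# layer 3: only a network filesystem (here NFS) makes the data
# reachable from other machines
echo '/export 192.168.0.0/24(rw,sync)' >> /etc/exports
exportfs -ra
```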
Re: RaiserFS via NFS
On Sun, 18 Apr 2004 at 01:16, Andrew Miehs wrote:
> > I suggest you all read
> > http://www.porcupine.org/postfix-mirror/newdoc/NFS_README.html -
> > especially the sentence "Thus, Postfix on NFS is slightly less
> > reliable than Postfix on a local disk."
> Either something is reliable or not. There is no such thing as
> "slightly less reliable".

I suggest you read it first... Quote from NFS_README.html:

# In order to have mailbox locking over NFS you have to configure
# everything to use fcntl() locks for mailbox access (or switch to
# maildir style, which needs no application-level lock controls).

So if you use maildir (which you probably will with a setup like this) you won't have any problems with NFS at all. And yes, we run setups like this - with some hundred thousand mails per day - and have never had a problem...

best regards,
Markus
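For reference, switching Postfix's local delivery to maildir style (so no application-level locking is needed over NFS) is a one-line change in main.cf - a minimal sketch, shown here only for the delivery-format setting:

```
# /etc/postfix/main.cf
# the trailing slash tells the local(8) delivery agent to use
# maildir format instead of mbox
home_mailbox = Maildir/
```

followed by a "postfix reload".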
Re: debian on HP proliant
On Sat, 17 Apr 2004 at 18:22, Nathan Eric Norman wrote:
> > The installer from woody has built-in support for the cciss
> > controller on at least the ProLiant DL580 G2. It works smoothly, but
> > lacks support for the default installed 3Com gig-ethernet adapter
> > (tg3 driver). Once installed, the network installer for sarge
> > detects the tg3 gig-ethernet adaptor automagically. -- We're moving
> > to Sarge now.
> This is true, but d-i doesn't support booting off the SmartArray
> because the cciss driver is a module. I already installed onto a
> DL360, but couldn't install a bootblock.

Hu? I installed Woody (bf24) on a couple of DL380G3 without a hitch - the cciss driver works just fine and you can of course boot from it. The only special thing I do is to load the module for the installed NIC (Broadcom bcm57xx - tg3.o) so I can download a new kernel as soon as the base system is installed...

> I don't want to put too much time into this; our company has a lot of
> Compaq/HP and I've been asked to find out how hard it is to install
> Debian. If it's too hard (read: time invested is too high) then we'll
> go buy IBM instead for the Linux servers.

We have a lot of them in production too - BECAUSE they're fast to set up and work flawlessly... ;o)

best regards,
Markus
Re: debian on HP proliant
On Wed, 14 Apr 2004 at 21:36, Christopher Sharp wrote:
> On Sat, 17 Jan 2004 16:11:26 +0100, Markus Oswald wrote:
> > Having said that, the ProLiant ML330 comes with an ATA-RAID based on
> > an LSI chipset (MegaIDE) which is not supported by Debian - the only
> > driver available is half GNU, half closed-source. Furthermore, the
> > drives attached to those IDE ports are not accessible as normal IDE
> > devices (i.e. /dev/hda), so you basically get a machine without any
> > usable IDE interface except the one attached to the CD-ROM. If you
> > buy one of these machines you'll either have to use a model with a
> > SCSI controller or install an extra IDE controller.
> I got this booting in a lab with the on-board ATA-RAID using a bf2.4
> kernel some weeks ago (February). I was using an HP DL320 server. The
> only issue with the bf2.4 kernel was requiring a net module for the
> NIC, which I managed to successfully extricate from the rpm and
> insmod.

Which chipset is used on current DL320G3 machines? Most Promise chipsets work just fine with Woody; the ML330 uses another chipset without OSS drivers...

BTW: If the NIC is a Broadcom bcm57xx (as in most newer ProLiants I've seen), you can use the drivers from Broadcom and compile them against 2.4.18-bf24, or grab the binaries from my homepage:
http://people.iirc.at/moswald/linux/bf24_modules/bcm5700/
(Note: you'll need bcm5700.o, NOT tg3.o. Kernels from 2.4.19 upwards include tg3.o, which works with the bcm57xx chipset.)

> I'm now trying to do the same using the new debian-installer and the
> testing distribution, but notice that there's no megaide.o
> module/driver in the new three-floppy testing distribution. I've got
> the shim source for the driver from LSI, but having compiled it on
> another 2.4.25 testing box and copied it onto a floppy, the module
> refuses to insmod on my debian-installer box.

As I already said: megaide.o includes proprietary code and therefore cannot be included in Debian or the stock kernel. You'll have to compile it yourself and put it on a boot floppy, just as Lucas Albers described a few days ago. Furthermore, a module compiled against 2.4.25 won't work with 2.4.18!

> Before I start building a custom debian-installer rescue floppy with a
> customised kernel including this module, I wondered if anyone knew of
> a module floppy that might have a working LSI ATA-RAID kernel module
> on it.

I doubt LSI will allow binary redistribution of their (partly proprietary) drivers...

best regards,
Markus
Re: using hp proliant ml 330
On Thu, 8 Apr 2004 at 20:09, Lucas Albers wrote:
> I got it to work, but I was trying to make boot floppies so I could
> load the drivers from the install cd and install directly onto it.
> Could not find directions on this anywhere, or on how to compile it
> statically into the kernel.

As far as I can remember, you can't compile the driver statically into the kernel - probably due to licensing issues. It should be possible to put it into the /boot directory on a floppy and load it during the setup process (i.e. "preload modules from floppy").

The controller used in the ProLiant ML330 series is an IDE-RAID, and most of the logic is done not by the controller but by the driver itself, so performance will probably suck...

> My links refer to source to compile the drivers as a module. It's GPL
> released.

I just took a look at the files I got from LSI (who now own AMI), and the driver is half GPL, half proprietary. Quoting megaide-shimdriver-readme.txt: "LSI Logic's Shim driver has its raid intelligence as binary file megaide_lib.o and the rest of the driver is open. megaide_lib.o can be built with the open source to get driver image megaide.o."

best regards,
Markus
Re: using hp proliant ml 330
On Thu, 8 Apr 2004 at 00:42, Guillaume Plessis wrote:
> Hello. Being lucky with Google may give you the solution:
> http://www.campbell-lange.net/linux/

AMI MegaRAID != AMI MegaIDE. The controller used in the ProLiant ML330 series is an IDE-RAID, and most of the logic is done not by the controller but by the driver itself, so performance will probably suck... The controller needs proprietary drivers, as AMI wants to protect their intellectual property - despite RAID 0/1 being quite simple...

As Lucas wrote, you CAN use it with Debian, but I would advise against it. Updating the kernel will be more work, and you cannot even quickly recover your system with Knoppix (or something alike) because of the proprietary modules. We ditched our ML330 after a few days and replaced it with a DL380 - a bit more expensive, but worth the money.

Basically the ML330 is a nice machine - except for the IDE subsystem (there is only one other IDE port, used by the CD-ROM, so you can't even use software RAID, as the HDDs won't be detected by the OS!). So if you really want a cheap dual-Xeon tower server made by HP/Compaq, either get an extra IDE controller, buy the SCSI version or use a RAID controller which is supported by the kernel. It will save you a lot of time and headaches... ;o)

best regards,
Markus
Re: Which SATA RAID controller?
On Wed, 24 Mar 2004 at 00:31, Craig Sanders wrote:
> Anyone have any opinions about the Adaptec 2400 (ATA) or 2410 (SATA)?
> They have driver support in 2.4.x and 2.6.x kernels - no idea how
> good, though.

We have a 2410 in our backup server, working flawlessly with a 2.6.4 kernel. AFAIR it did work with 2.4 during the burn-in test, but as soon as we wanted to install the production system it wasn't recognized anymore, until we switched to 2.6. From dmesg:

  Red Hat/Adaptec aacraid driver (1.1.2-lk1 Mar 17 2004)
  AAC0: kernel 4.1.4 build 9965
  AAC0: monitor 4.1.4 build 9965
  AAC0: bios 4.1.0 build 5912
  AAC0: serial b9c379fafaf001
  scsi0 : aacraid
    Vendor: ADAPTEC   Model: AAR-2410SA RAID5  Rev: V1.0
    Type:   Direct-Access                      ANSI SCSI revision: 02
  SCSI device sda: 960344832 512-byte hdwr sectors (491697 MB)
  sda: Write Protect is off
  sda: Mode Sense: 03 00 00 00
  SCSI device sda: drive cache: write through
   sda: sda1 sda2 sda3
  Attached scsi removable disk sda at scsi0, channel 0, id 0, lun 0

> Unlike the 3ware cards (or any other IDE/SATA raid cards I've heard
> of), they do have a large (128 MB) write-cache - which is essential
> for raid-5 performance.

We have 4 x 160 GB Maxtor drives attached to it, configured as RAID5. If anyone is interested I can do a quick bonnie++ benchmark, though I don't know if our card has 64 MB or 128 MB.

best regards,
Markus
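For anyone wanting to reproduce such a benchmark, a sketch of a bonnie++ invocation - the test directory, file size and user are examples; pick a size of at least twice your RAM so the page cache doesn't skew the result:

```shell
# run as root, dropping privileges to an unprivileged user;
# -s is the test file size in MB, -d the directory on the RAID5 array
bonnie++ -d /mnt/raid5/tmp -s 2048 -u nobody
```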
Re: Woody on Proliant ML350 G3 (smartarray 641)
On Wed, 11 Feb 2004 at 20:12, Emmanuel Halbwachs wrote:
> Hello everybody, I've just subscribed to the list after discovering it
> recently. I'm not strictly an ISP, but I provide various services for
> 150-200 users. I would like to run woody on an HP Compaq ProLiant
> ML350 G3 (no choice of model, for public-procurement reasons). Before
> buying some machines, I would like to check whether woody can be
> installed on it. Actually, colleagues of mine own some (running
> FreeBSD) and proposed that I try to install woody on one box. The
> hardware is:
>   raid controller: SmartArray 641
>   ethernet NIC: BCM5702 (subsystem: NC7760)
> This will be my first woody install on hardware RAID, so I'm
> inexperienced. Colleagues told me that the woody install fails due to
> the old 2.4.18-bf24 kernel, which doesn't include recent modules for
> the raid (cciss) and the NIC (tg3 seems better than bcm5700). I've
> searched the list archive but didn't really find an answer.

I don't know for sure about the RAID controller [1], but to get the NIC in a ProLiant DL380G3 (a BCM57xx too) working, I compiled the driver from Broadcom against a 2.4.18-bf24 source. This way I get modules which can be used with the woody bf24 kernel, so I can set up the system and then download a newer kernel. Beginning with 2.4.19 you can use the tg3.o module supplied by the kernel...

You can grab the compiled modules from my repository (http://people.iirc.at/moswald/linux/bf24_modules/bcm5700/) or the source directly from Broadcom (http://www.broadcom.com/drivers/).

[1] It may work with the cciss module just as the SmartArray 5i does - but I read somewhere about a bug in the driver which wasn't fixed until 2.4.21.

best regards,
Markus
Re: Remote server management
Am Sa, den 07.02.2004 schrieb Micah Anderson um 00:26: Since we often have limited physical access to our machines, and our collective members are spread around the country, our holy grail is remote hardware administration. This could mean a lot of things. Mostly, we just need to: 1. power cycle computers remotely 2. access the BIOS and boot menu remotely This and more can all be done with HP/Compaq's iLO (Integrated Lights-Out), which is included in many newer ProLiant servers. Take a look at their website for a detailed listing of all available features... We have been using a bunch of those machines (DL380 G3 and DL360 G3) for some months now and have never had a problem with the hardware or iLO itself. Basically it's a controller which is powered by the standby power of the system and has a dedicated network interface. You can access a virtual console either via an integrated webserver (which loads a Java applet to emulate the console output) or via telnet. It's a little bit slower than the output on your normal screen, but you can set up the whole server (BIOS, RAID and OS) without connecting a keyboard or screen at all. best regards, Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: debian on HP proliant
Am Fr, den 16.01.2004 schrieb Francis Tyers um 16:15: We have a load of DL380s/DL360s here, any issues feel free to give me a mail... The onboard 'SCSI' controller appears as a block device and not as a SCSI device under Linux. Though I can confirm that the SmartArray 5 works like a charm with Debian Woody (using the cciss module), the ProLiant DL320 is an IDE machine which doesn't have a SCSI controller but an IDE RAID. I assume it's based on the Fasttrack chipset, which works with Debian Woody out of the box too. You can access the array via /dev/ataraid/d0; the partitions will be called /dev/ataraid/d0pX. (Maybe worth mentioning: I recently had some problems with newer revisions of the Fasttrack TX-2 chipset, which don't seem to work with the modules in 2.4 kernels. Older controllers from the same series worked just fine, but newer ones are not even detected by the kernel.) Having said that, the ProLiant ML330 comes with an ATA RAID based on an LSI chipset (MegaIDE) which is not supported by Debian - the only driver available is a half GNU, half closed-source driver. Furthermore, the drives attached to those IDE ports are not accessible as normal IDE devices (i.e. /dev/hda), so you basically get a machine without any usable IDE interface except for the one attached to the CD-ROM. If you buy one of these machines you'll either have to use a model with a SCSI controller or install an extra IDE controller. best regards, Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
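[Editor's note: to make the /dev/ataraid naming above concrete, preparing such an array looks roughly like the following. Partition numbers and filesystem choice are hypothetical:]

```shell
# Partition and use the ataraid array; device names follow the text,
# partition layout is illustrative only
fdisk /dev/ataraid/d0          # creates /dev/ataraid/d0p1, d0p2, ...
mke2fs -j /dev/ataraid/d0p1    # e.g. ext3 on the first partition
mount /dev/ataraid/d0p1 /mnt
```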
Re: How to investigate kernel failure?
Am Son, 2003-10-19 um 08.29 schrieb Arnt Karlsen: ..I saw raid over net somewhere, where? Testing? Sid? I always keep finding stuff I can use, the next month. ;-) You probably mean DRBD. As far as I remember it's packaged for testing and unstable... best regards Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Creating custom, automated, Debian installs.
Am Mon, 2003-10-20 um 19.25 schrieb Steve Kemp: I think I want to trim down the installer such that I don't have to answer so many questions, and just input basic information like hostname, etc. Any pointers appreciated - I wasn't sure this is the best list but I assume any large ISP has some means of automated install and rollout of server machines. Apologies if this isn't the case .. Not really answering your question about mass-installing Debian, but suggesting another solution/approach: Did you take a look at Gibraltar Linux (www.gibraltar.at)? It's a Debian GNU/Linux based firewall distribution which boots straight off the CD - it should be ideal for your application, as many encryption tools and kernel patches are already included in the stock ISO. best regards, Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Creating custom, automated, Debian installs.
Am Mon, 2003-10-20 um 21.56 schrieb Steve Kemp: I'm downloading the ISO now, but I'm a little put off to see that it's going to be a commercial offering. I'm keen to stick to free software, especially considering the most important components are going to be free. As far as I know, only the web interface (which is currently under development and not even in the distribution yet) will be commercial; the rest of the distribution will stay free. If you don't need the web interface or prefer to use the console to configure the system (as I do), you don't have to pay anything. best regards, Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Sugesstions building a rather big mail system.
Am Don, 2003-10-09 um 02.50 schrieb Donovan Baarda: Using snapshots to do an incremental backup would be no different to doing any other type of backup using snapshots. It's the same as a normal incremental backup, just with the added guarantee that the filesystem is not changing underneath you as you do it. I guess I haven't described clearly enough what I mean - maybe I have misunderstood the concept (I first heard about this in a talk by a NetApp sales rep). I was told that some storage appliances - for example some bigger NetApp filers - can do backups using incremental snapshots. This doesn't mean they make a snapshot and then create a backup from it to get consistent data, but that they use a series of snapshots themselves. Say the first snapshot is created at 05:00 AM with 10TB of data on the filer and the second one an hour later at 06:00 AM - the incremental snapshot would back up only those blocks/files/whatever that have changed since then (maybe just a few GB). This allows much faster backups/restores with guaranteed consistency. At least that's how I understood it about a year ago - the concept sounds really nice to me, but neither the filers nor the software which allows these procedures are cheap, so I couldn't play with it yet. Does anyone on the list use these features and can tell what they really do or how well they work? From a quick glance at their website I think it's called SnapRestore: http://www.netapp.com/products/filer/snaprestore.html Disclaimer: I don't work for NetApp or any of their associates, nor did I in the past. I don't even sell their products ;-) best regards, Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Sugesstions building a rather big mail system.
Am Die, 2003-10-07 um 22.34 schrieb Rich Puhek: Would LVM snapshots work well enough to do the trick? I haven't played with LVM, so I don't know how long it takes to perform a snapshot... Can LVM do incremental snapshots? You don't want to back up 1TB (for example) of data when only 100GB have changed since the last backup. best regards, Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
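[Editor's note: for context on the question above - LVM snapshots are copy-on-write, so only blocks changed since snapshot creation consume snapshot space, but the snapshot itself is not an incremental backup; you still have to back up the full frozen view. A minimal sketch of the usual pattern (volume group, names and sizes are invented):]

```shell
# Create a copy-on-write snapshot of a logical volume, back it up
# from the frozen view, then drop the snapshot.
# vg0/mail and all sizes/paths are hypothetical.
lvcreate --size 10G --snapshot --name mail-snap /dev/vg0/mail

mount -o ro /dev/vg0/mail-snap /mnt/snap
tar czf /backup/mail-$(date +%Y%m%d).tar.gz -C /mnt/snap .

umount /mnt/snap
lvremove -f /dev/vg0/mail-snap
```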
Re: Sugesstions building a rather big mail system.
Am Die, 2003-10-07 um 22.07 schrieb Alex Borges: For example, we will use two Dual-P4Xeon 2Gb for the IMAP/POP, same for the SMTP (same kind of server, but another two servers). Depending on what you want to do on the SMTP server (i.e. spamassassin, scanning for viruses, filters, auto-reply, ...) you may need more boxes to handle the load. Then, the apache (which i am most afraid about) are the ones that spell trouble BIGTIME. This is because php/sm will prove to be the most resource intensive application in the farm (SMTP is simple, IMAP is simple). So we give it three of the same boxen and its own dual pair of LVS. I think the second pair of LVS balancers is overkill. Balancing (even in NAT mode) needs hardly any resources. Use Gbit interfaces if you think you'll get more than 100 Mbit of network I/O... Then, the backend, this will be two failover enabled boxes with postgres and openldap. They will be quad xeon 6GB ram. Isn't a quad Xeon just plain overkill? I haven't tested a setup with OpenLDAP, but a Postfix/Courier/MySQL setup will generate simple queries which any decent server should handle without any problem, even at a rate of several thousand per second. All of that goes to the SAN. The local storage in each server should respond mostly to services cache necessities (a php cache for the apaches perhaps). Think about splitting up the storage into multiple devices. 120k users will generate a lot of I/O on the disks - you'll need a REALLY fast disk array for that (not bulk transfer, but I/O per second). best regards, Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Best way to setup a cheap web cluster?
Am Mit, 2003-10-08 um 15.43 schrieb Ryan Nowakowski: Hey folks, I'm trying to setup a cheap debian web cluster using tools from the linux-ha project. We're using heartbeat and mon to monitor services and do the failover. We'd like to setup shared disk space without buying any new hardware. We have three cheap servers in the cluster. We're thinking about using drbd for the shared disk space. As you already said: Use DRBD for the shared storage and heartbeat as the cluster manager. That way you'll get an easy-to-set-up failover cluster completely based on OSS and without any special hardware. BUT this is not a load-balanced cluster: only one machine will handle the load; it won't be spread across both machines. DRBD currently cannot run active-active (i.e. most filesystems can't handle it, and the GFS support is not yet ready). You'll have two machines, one handling the requests - if this machine fails, the second one will take over and resume processing incoming requests, but until then it sits there and does nothing (well, monitoring the primary node - but that doesn't count). Speaking of web clusters, you probably mean a load-balanced HTTP cluster (with HA features). For that you'll need some sort of loadbalancer (see one of the recent threads about clusters and balancers). Neither DRBD (shared storage) nor heartbeat (cluster manager) will do this for you. Instead you could use LVS (www.linuxvirtualserver.org). How have others setup web clusters using debian? We're not averse to backporting packages or using outside apt sources if necessary. Either way, you don't need any backports - almost (?) everything should be packaged for Debian. If you want the latest versions you'll have to compile some sources yourself though. 
best regards, Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
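[Editor's note: to make the two-node DRBD/heartbeat failover setup above concrete, the configuration amounts to roughly the two fragments below. Hostnames, IPs, devices and the exact syntax (from the DRBD 0.6-era config and heartbeat's haresources format) are reconstructed from memory - check the samples shipped with each package:]

```
# /etc/ha.d/haresources (heartbeat) - node1 normally owns the
# floating IP, the DRBD disk and apache:
node1 IPaddr::192.168.0.10 datadisk::drbd0 apache

# /etc/drbd.conf (sketch) - one mirrored resource between two nodes;
# all names and addresses are invented for illustration:
resource drbd0 {
  protocol = C
  on node1 { device = /dev/nb0  disk = /dev/sda3  address = 192.168.0.1  port = 7788 }
  on node2 { device = /dev/nb0  disk = /dev/sda3  address = 192.168.0.2  port = 7788 }
}
```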
Re: Best way to setup a cheap web cluster?
Am Mit, 2003-10-08 um 18.40 schrieb Ryan Nowakowski: Will drbd work using debian woody without any backports or additional packages? I've heard otherwise. It's been a while since the last time I installed DRBD on a pure Woody, but after patching the kernel it should work just fine. It's just a kernel module, after all. You may need to use a vanilla kernel instead of the Debian kernel (maybe some patches conflict, but I doubt it) though. best regards, Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Sugesstions building a rather big mail system.
Am Mon, 2003-10-06 um 16.03 schrieb Fredrik: Hello fellow sysadmins, I have been approached about building a rather big mail system handling 500.000 existing accounts (running today on a windows based product (ick)) with a growth of about 50.000 new accounts per year. The services needed are: smtp, pop3, imap4. As mentioned before: Sounds like fun! :-) I recently built a similar mail cluster with OSS only. Though it's currently smaller (20k users), it was designed to grow as needed. I guess it could handle 500k users without any modification except additional machines. I have used LVS for about 3y with good results for 30.000 accounts. But this is certainly a bigger project. Should I go for alteon or any other closed product or stick with LVS? I don't see any need for something other than LVS, especially not for a mail cluster. If you have any doubts about the throughput of your balancer: a) Use Gbit interfaces and direct routing if that's not enough (though I doubt it) b) Use multiple MX records, each pointing to a separate set of balancers. My main concern is the storage. SAN? Well, I guess that's the only problem you'll have. Either you use some BIG SAN or NAS (NetApp, EMC) or you use your software to distribute it across multiple servers. I did the latter as it's probably much cheaper (haven't calculated for 500k users, just for 20k to 100k). Here's how: The MX servers run Postfix with a MySQL backend for authentication (and spam filtering, virus scanning, server-based filters, ...). Depending on the user's maildir they access different servers via NFS to store their mail. Each of those is equipped with about 1TB of disk to store the mails. The POP/IMAP servers do the same thing vice versa. Real fun is backing up the data, as you have lots of small files across multiple servers which are changing all the time as users access their mail via IMAP or receive something. For 500k users you'll probably want a quite good backup concept too. 
Using a SAN or NAS approach for storage might be an advantage here. Anyone used supersparrow for source based load balancing? Nope. I don't see a reason to use it here, as LVS does everything (and more) I need for setting up something like this. As hardware planning might get a little detailed (and therefore go OT), feel free to contact me off-list. best regards, Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
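[Editor's note: the "Postfix with a MySQL backend" coupling described above boils down to a lookup map that resolves each address to its maildir on the right storage server. A sketch using Postfix's old-style MySQL map parameters - the database schema, credentials and filenames are invented for illustration:]

```
# /etc/postfix/main.cf (fragment)
virtual_mailbox_maps = mysql:/etc/postfix/mysql-mailbox.cf

# /etc/postfix/mysql-mailbox.cf - resolves an address to a maildir
# path; the path encodes which NFS-mounted storage server holds
# the mailbox (table and column names are made up)
hosts = db.example.com
user = postfix
password = secret
dbname = mail
table = mailbox
select_field = maildir
where_field = address
```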
Re: Sugesstions building a rather big mail system.
Am Mon, 2003-10-06 um 16.51 schrieb Theodore Knab: Most things seem quite good - anyway, a few questions/comments: Since you are familiar with LVS, you should have no problem setting 2 [redundant] LVS systems up. You could balance the load between 10-20 IMAP servers. I would also suggest LVS, as I stated in my other posting - use keepalived to get your balancers redundant. But 10-20 IMAP servers (each of them dual Xeon)? Seems a little bit much... In my experience the POP/IMAP servers are the machines with the least load. They need some I/O, so you definitely want to use Gbit for accessing the data via the network. You might also be able to use the same 2 LVS systems to balance your load between the webmail servers. Crude Diagram [Firewall] | | | [LVS1][LVS2] | | [Fiber Only Switch] | Why fiber-only? Gbit copper is MUCH cheaper, and as all servers are probably side-by-side in a few racks I don't see why you would need fiber interconnects. Estimated Minimums needed for 500,000+ Email Users -- 10 IMAP servers [Courier IMAP 1 [Dual Xeon 1GHz] server /200 active users] w/ XFS filesystem and Debian Stable As said before - 10 seems a little bit much. You can add more servers to your pool anyway. As for the FS: XFS/ReiserFS/ext3 shouldn't matter, as all files will be stored on a SAN/NAS anyway. 20 Webmail Servers [Squirrel-mail 1 [Dual Xeon 1Ghz] server /100 active users] w/ XFS filesystem and Debian Stable If you want to provide webmail too... 20 dual Xeons seem a little much again, as probably not all users will use HTTP to access their mail if they can use POP/IMAP instead. 2 Database Servers for authentication either [Mysql or OpenLDAP] w/ XFS filesystem and Debian Stable Redundant setup, or a clustered approach with multiple read-only servers and one master server if the DB queries become too intensive. 2-4 MX Gateways running either Exim or Postfix MTA and SPAMD with w/ XFS filesystem and Debian Stable Amavisd Now that's IMHO not enough for 500k users... 
If you do spam filtering, virus scanning and maybe even filtering (through procmail or something else) on your MX, you'll definitely need either some quite powerful machines or some more smaller ones. 2 [Fiber Channel] SAN Volumes for [MAIL storage] redundancy. /Crude Diagram Or NAS, or even self-built NFS servers. If you want to access your FC SAN from all your MX and POP/IMAP servers, it could become a little bit expensive if you consider the bunch of FC HBAs and FC switches you'll need... Anyway, as I said in my other post: Hardware dimensioning is something which will definitely consume quite a bit of time, as you don't want to have any bottlenecks nor spend huge amounts of money on something you don't need. Maybe talk a few hours with someone (real-life, not ML) who has built something similar (not necessarily a mail cluster, but something with huge amounts of data on a network), as they probably know most of the pitfalls first-hand :-) best regards, Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Sugesstions building a rather big mail system.
Am Die, 2003-10-07 um 17.05 schrieb Emmanuel Lacour: What about using localization with ldap and a pop/imap proxy: Users are dispatched on several real pop/imap servers, postfix delivers to the correct server according to the ldap entry, pop/imap proxies are load balanced and connect to the right server according to the ldap entry for that user. Yes, also possible and also a nice approach, but I think the LVS/central-storage approach is the easier solution, and I've already deployed setups like the one I described - that's why I mentioned it. I've read about perdition (http://www.vergenet.net/linux/perdition/) when Russel Coker mentioned it on the list some time ago, but couldn't find time to give it a try on a testbed (yet). Like this you avoid a central storage. If one pop/imap server crashes, it affects only users on this server. Each pop/imap server needs to have RAID and backups ;-) Well, that's not much different from my NFS approach: If one of your storage servers crashes, only those users are affected. You'll probably want to use a bunch of smaller storage servers (think 0,5-1TB) with fast U320 15k disks anyway, as you'll get quite a bit of I/O and get a good distribution of data across your storage network as a bonus. You can saturate even a RAID5 (with U320/15k) quite easily with the I/O a mail server usually generates (i.e. LOTS of small files). Backup is something you'll definitely want to take a closer look at, as 500k users will generate enough data to keep larger tape libraries busy for hours (don't forget the restore procedure may take a long time too). Incremental FS snapshots would be cool for this, but I don't know of any way to do this with Linux. best regards, Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? 
Contact [EMAIL PROTECTED]
Re: Apache clustering w/ load balancing and failover
On Fri, 2003-09-19 at 16:41, Jeremy Zawodny wrote: On Thu, Sep 18, 2003 at 06:38:44PM +0200, Sébastien Lefebvre wrote: You might want to use keepalived which includes a vrrp implementation. I'm running it on the clusters I set up: http://keepalived.sourceforge.net/ I even use it on Netfilter firewalls without any trouble (without the LVS support) Are there any good docs or howtos that describe how to do that? Setting up two web servers with vrrp/keepalived should be easy, but everything I looked at seemed intimately tied to LVS. Did you take a look at the keepalived documentation? http://keepalived.sourceforge.net/documentation.html All you have to do is patch your kernel with LVS or use the appropriate netfilter ipvs modules, compile and install keepalived, and configure it according to the documentation and/or your special requirements. Now you can (should) test all possible failover scenarios with your balancer cluster and check whether the real servers are added to and removed from the pool correctly. A web server itself doesn't need any special configuration at all (well, maybe a little routing/firewalling if you choose to use direct routing or tunneling instead of NAT behind your balancer) and can be integrated in the cluster within a few minutes. best regards, Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
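[Editor's note: the keepalived configuration described above boils down to a vrrp_instance for the floating IP plus a virtual_server block listing the real web servers. A minimal sketch - all addresses, interface names and the password are invented; consult the samples shipped with your keepalived version for exact syntax:]

```
vrrp_instance VI_1 {
    state MASTER            # BACKUP on the second balancer
    interface eth0
    virtual_router_id 51
    priority 100
    authentication {
        auth_type PASS
        auth_pass secret
    }
    virtual_ipaddress {
        192.168.0.100
    }
}

virtual_server 192.168.0.100 80 {
    delay_loop 10
    lb_algo rr              # round-robin scheduling
    lb_kind NAT
    protocol TCP
    real_server 10.0.0.11 80 {
        TCP_CHECK { connect_timeout 3 }
    }
    real_server 10.0.0.12 80 {
        TCP_CHECK { connect_timeout 3 }
    }
}
```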
Re: Apache clustering w/ load balancing and failover
On Fri, 2003-09-19 at 19:58, Jeremy Zawodny wrote: Well there's the confusing part. You had said: I even use it on Netfilter firewalls without any trouble (without the LVS support). It's the 'without the LVS support' that caught my eye. Yes, you have been able to use keepalived without LVS (just the VRRP part) for some months now... The docs didn't make it clear that I could do any of this without LVS-related kernel patches. Further backing that, you now say: All you have to do is patch your kernel with LVS or use the appropriate netfilter-ipvs-modules, compile and install keepalived and configure it according to the documentation and/or your special requirements. So I guess I've either misunderstood or asked the wrong question(s). Because the documentation all seems to revolve around LVS implementations. It's not clear which pieces are optional--unless I'm interpreting it incorrectly. That's because keepalived was first written as a management program for your LVS server pools. Later the VRRP part was introduced to allow redundant balancers without the need for additional programs like heartbeat. As far as I remember, it's possible to compile keepalived without the LVS (ipvs) part if you just need VRRP. Because the thread started with Apache clustering and you said something about two web servers, I assumed you wanted redundant balancers. VRRP (Virtual Router Redundancy Protocol) is intended for router redundancy (and firewalls, balancers, ...), not necessarily for redundant (application) servers. For setting up a failover cluster (i.e. two machines, active/standby - for redundant - but not balanced - Apache, MySQL, Samba, ...) you might want to take a look at heartbeat, piranha, failsafe or something like that. 
best regards, Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
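[Editor's note: for the VRRP-only use discussed above, the config shrinks to just a vrrp_instance section - no virtual_server block and no ipvs modules. A sketch with invented addresses and interface names:]

```
# keepalived.conf using only the VRRP part - e.g. two firewalls
# sharing a floating IP, no LVS involved
vrrp_instance FW_1 {
    state MASTER            # BACKUP on the second box
    interface eth0
    virtual_router_id 52
    priority 150            # lower on the backup
    virtual_ipaddress {
        10.0.0.1
    }
}
```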
Re: Apache clustering w/ load balancing and failover
On Wed, 2003-09-17 at 20:52, Shri Shrikumar wrote: Thanks for the response. Let me just clarify. If I have two boxes, I can configure both of them to be webservers and one of them to be the lvs node. I don't need a third machine to be a dedicated node. Is this correct? No, I don't think this would work. You'll need a third box to do the balancing (well, maybe you could get it to work, but it's not intended this way). As I said before, the balancer doesn't have to be a fast machine - almost anything you can find will be sufficient. best regards, Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
Re: Apache clustering w/ load balancing and failover
On Thu, 2003-09-18 at 17:44, Jason Lim wrote: Strangely enough, you might find FreeBSD (or one of the BSDs) working better as the forwarder than Linux, due to its better ability to handle many multiple concurrent connections. YMMV of course. Is the balancer functionality built into the FreeBSD kernel like LVS? How does *BSD handle it? Any URL? best regards, Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
RE: Apache clustering w/ load balancing and failover
On Wed, 2003-09-17 at 12:07, Javier Castillo Alcibar wrote: By the way, what filesystem do you recommend for these kinds of clusters? NFS? Coda? Depends on what you want to do - for instance: Build a balanced server farm to handle a lot of traffic: Just use an NFS server as centralized storage for your document root and let all cluster nodes access it. Your balancer(s) can handle the HA part and manage your server pool. Your NFS server is your SPOF, though, if it's not a cluster itself. Build a (two node) failover cluster: Take a look at DRBD - it's a redundant network block device. You can use almost any filesystem on top of it - preferably a journaling one, of course. best regards Markus -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
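[Editor's note: the NFS-based document root described above amounts to one export on the storage box and one mount on every web node. A minimal sketch - hostnames and paths are invented:]

```
# On the NFS server - /etc/exports (one line per allowed client):
/srv/www   node1(ro,no_root_squash) node2(ro,no_root_squash)

# On each web node - /etc/fstab entry mounting the shared docroot:
storage:/srv/www   /var/www   nfs   ro,hard,intr   0 0
```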
Re: Apache clustering w/ load balancing and failover
On Wed, 2003-09-17 at 15:00, Shri Shrikumar wrote: Looking at the documentation for LVS, it mentions that it needs two nodes, a primary node and a backup node which then feeds into n real servers. Actually I never saw this mentioned in the documentation - I haven't looked at it for quite some time now, though. LVS definitely works with ONE machine which acts as the loadbalancer. You can use a second machine for failover if you need the redundancy, but as far as I know LVS can't handle this by itself, so you would have to use keepalived or heartbeat for that. The balancer hardly needs any resources - if it wasn't for the quality of the hardware (i.e. you don't want to see your balancer die and take the whole farm offline because of some el cheapo motherboard) you could use any old Pentium lying around to handle quite a bit of traffic. Even the cheapest Celeron rackserver can probably handle a few hundred megabits of throughput... To sum it up: You take some machine which will act as a loadbalancer and distribute the HTTP (SMTP/POP/...) requests to your pool of real servers. To achieve this, patch your kernel or load the ipvs modules, then define a service and add real servers... If you build some high-performance and/or high-availability farm with this setup you should also consider some other things (i.e. planning the cluster environment so you don't run into bottlenecks later), but for a first test setup you could probably start right away... If you have further questions, we can discuss details off-list as it may become OT. best regards, Markus Oswald -- Markus Oswald [EMAIL PROTECTED] \ Unix and Network Administration Graz, AUSTRIA \ High Availability / Cluster Mobile: +43 676 6485415\ System Consulting Fax:+43 316 428896 \ Web Development -- To UNSUBSCRIBE, email to [EMAIL PROTECTED] with a subject of unsubscribe. Trouble? Contact [EMAIL PROTECTED]
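[Editor's note: the "define a service and add real servers" step above maps directly onto ipvsadm. A sketch with invented addresses, run as root on the balancer after loading the ipvs modules:]

```shell
# Define a virtual HTTP service on the balancer's public IP,
# round-robin scheduling (all addresses are illustrative)
ipvsadm -A -t 192.168.0.100:80 -s rr

# Add two real servers behind it in NAT (masquerading) mode
ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.11:80 -m
ipvsadm -a -t 192.168.0.100:80 -r 10.0.0.12:80 -m

# Inspect the resulting virtual server table
ipvsadm -L -n
```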
Re: Apache clustering w/ load balancing and failover
On Wed, 2003-09-17 at 12:05, Joost Veldkamp wrote:
> You can also have a look at www.ultramonkey.org, deb packages available. It simplifies the installation of LVS a lot. Recently there was an article in Sysadmin magazine about clustering. There was an interesting part about OpenSSI; it can be found here: http://www.samag.com/documents/s=8817/sam0313b/0313b.htm

I didn't read through the whole article, but OpenSSI seems to do the clustering at process level (somewhat like Mosix). If that is the case: technically you could probably run a webserver on top of such a cluster, but I doubt it would be a good idea, as it will probably have quite a bit of overhead which doesn't seem necessary for an Apache cluster. In the end the cluster would either need some really beefy hardware (especially network, for the I/O, I guess) and/or won't deliver the performance you would expect. A dedicated load balancer is probably the better solution, as it doesn't add much overhead - its only job is to distribute incoming requests.

Anyway: please correct me if I'm wrong! ;o)

best regards,
Markus
Re: Woody with Intel S875WP1-E board? - OT
On Fri, 2003-09-12 at 14:54, Theodore J. Knab wrote:
> What kernel is Red Hat Linux 8.0 using? Seeing you are simply trying to get a board to work, this is more of a kernel issue than a distribution issue. If you were using something evil like Cold Fusion, it might be a distribution issue. Of course, all distribution issues can be worked around with symbolic links and the proper libraries.

Slightly OT: just wanted to mention that although ColdFusion is evil (and I second that ;o), all recent releases run just fine on Debian woody out of the box. We had to install it for some customers and there were no problems whatsoever, neither when installing it nor when running it in production.

best regards,
Markus
Re: Ethernet on a Compaq Proliant DL360 G3
On Tue, 2003-08-05 at 17:53, Tomàs Núñez Lirola wrote:
> I can read in the specifications that the card is a dual NC7781 PCI-X Gigabit 10/100/1000, but I've not found anything like this in the kernel. A nearby machine (also a DL360 G3) has RH installed; it uses the Tigon3 driver (tg3) and works properly. I tried the same kernel module, but although it loads without any error (modprobe tg3, or adding it to /etc/modules) and it lets me configure the card (ifconfig goes ok), after that I cannot ping any host (network is not accessible). I mean, everything seems to be ok, but there is no result. Has anyone installed Debian on a Compaq Proliant DL360 G3? What ethernet card did you configure?

Not a DL360 G3, but a couple of DL380 G3s, which are almost identical - just a little bigger due to their 6 drive bays instead of 2. I assume the Ethernet chipset is exactly the same as on a DL360 G3. The tg3 driver worked just fine for me, right out of the box, without any problems.

Stupid question: did you maybe confuse the interfaces? Try to listen with 'tcpdump' on one interface and see if there are ANY incoming packets. Even with wrong IP addresses or routing you should see traffic from other hosts on the same network (at least their broadcasts)...

best regards,
Markus
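The interface check I have in mind is just this (interface names are examples; needs root):

```shell
# Watch raw traffic on the first port - even with a wrong IP address
# or broken routing you should at least see ARP requests and
# broadcasts from other hosts on the segment
tcpdump -i eth0 -n

# If eth0 stays completely silent, the cable is probably plugged
# into the other port - try it there
tcpdump -i eth1 -n
```

If both interfaces are silent, it's a driver or link problem rather than an addressing one.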
Re: apt-get
On Fri, 2003-07-04 at 09:20, Craig wrote:
> Hi guys, how do I set up dpkg/apt-get to hold back a specific package when doing an apt-get upgrade?

Take a look at apt_preferences(5). From the manpage:

[...] VERSIONING: One purpose of the preferences file is to let the user select which version of a package will be installed. This selection can be made in a number of ways that fall into three categories: version, release and origin. [...]

HTH,
Markus
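A minimal example of such a pin in /etc/apt/preferences - the package name and version pattern are placeholders, pick your own:

```
# Keep apache at a 1.3.26 version; priority > 1000 even allows
# a downgrade back to this version
Package: apache
Pin: version 1.3.26*
Pin-Priority: 1001
```

Alternatively, 'echo apache hold | dpkg --set-selections' marks the package as held at the dpkg level.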
Re: loadbalancing
On Wed, 2003-06-11 at 16:15, Theodore Knab wrote:
> Will the Linux Virtual Server keep track of sessions for something like SquirrelMail on Apache?

You can set a --persistent flag to achieve a similar behaviour. From the man page of ipvsadm (from the IPVS patch):

-p, --persistent [timeout] Specify that a virtual service is persistent. If this option is specified, multiple requests from a client are redirected to the same real server selected for the first request. Optionally, the timeout of persistent sessions may be specified, given in seconds; otherwise the default of 300 seconds will be used. This option may be used in conjunction with protocols such as SSL or FTP, where it is important that clients consistently connect to the same real server.

> I want to use it to provide higher than 99.25% availability. ;-)

Shouldn't be too hard...

best regards,
Markus
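In practice that's just one extra flag when defining the service. A sketch (addresses are examples; needs root and IPVS support):

```shell
# Persistent virtual HTTP service: all requests from one client IP go
# to the same real server for up to 600 seconds, so a webmail session
# stays on the node that created it
ipvsadm -A -t 192.168.0.1:80 -s rr -p 600

# The pool of real servers (NAT mode)
ipvsadm -a -t 192.168.0.1:80 -r 10.0.0.11:80 -m
ipvsadm -a -t 192.168.0.1:80 -r 10.0.0.12:80 -m
```

The timeout is refreshed while the client keeps connecting, so a user who is active won't be moved mid-session.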
Re: realtime email backup across computer centers
On Tue, 2003-05-13 at 10:39, Stephan Poehlsen wrote:
> Hi, how would you realize a realtime email backup across two different computers in two different computer centers? Let's say I have a mail server A in city A and a backup mail server B in city B. So if an airplane crashes into one computer center, no email gets lost. I think all mail must be forwarded from server A to B (and acknowledged by B) before server A acknowledges incoming mail.

You could use DRBD to have your spool directory mirrored. If machine A goes down, B mounts the DRBD device and takes over the IP address of your mail service. Voila, you're up again - downtime: a few seconds...

The only real problem I see: if you really want to distribute your servers across two cities (!), you'll need a really good (i.e. stable) connection between both servers or you may face a split-brain situation. A fast uplink would be nice too, as you have to re-sync the whole device every time one of your servers goes down. Typically a DRBD pair has a Gbit interconnect through a crossover cable, to prevent such a situation being triggered by a switch failure or something else, and to provide reasonably fast re-sync times.

> Does a solution exist? Maybe with qmail?

The scenario above is completely MTA-independent - you can use qmail, sendmail, postfix, exim, courier-mta, $whatever...

best regards,
Markus
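A sketch of what the mirrored spool could look like in drbd.conf - the hostnames, devices and addresses are made up, and the exact syntax differs between DRBD versions, so treat this as illustration only:

```
# One DRBD resource holding the mail spool
resource mailspool {
  protocol C;              # synchronous: a write is acknowledged only
                           # after it has reached the peer's disk -
                           # exactly the "B must ack before A acks the
                           # mail" requirement

  on mail-a {              # primary in city A
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   10.0.0.1:7788;
    meta-disk internal;
  }

  on mail-b {              # standby in city B
    device    /dev/drbd0;
    disk      /dev/sda5;
    address   10.0.0.2:7788;
    meta-disk internal;
  }
}
```

Heartbeat (or similar) then handles mounting /dev/drbd0 on the survivor and moving the service IP.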
Re: NON-US can anyone reach aljazeera.net?
On Tue, 2003-03-25 at 23:49, [EMAIL PROTECTED] wrote:
> Can anyone reach aljazeera.net or english.aljazeera.net from outside of the US? Or any nameservers for it? I'm trying to determine if this is a US-only issue, ahem.

As this seems to be a large-scale outage/attack, a group of independent people from (mainly) Germany is trying to bring a mirror in Europe online as soon as possible, to provide access to a less biased news source than CNN. Just in case: the group is in no way associated with Al'Jazeera or any government, as far as I know.

A temporary mailing list is located at http://jazml.snafu.at/cgi-bin/mailman/listinfo/jazeera - traffic there is mostly German at the moment, but most people are willing to switch to English if there is more international traffic.

best regards,
Markus
Re: Re Lilo
On Wed, 2002-11-27 at 05:30, Brad Lay wrote:
> I'm sure there's a debian-specific way, but this way works ;)

Of course there is a Debian way of doing this ;o)

man mkboot

best regards,
Markus
Re: IPSEC and PPTP
On Fri, 2002-07-19 at 16:17, Grischa Schuering wrote:
> I am using Debian woody. What is the best and easiest way to get an IPSEC tunnel (encrypted, e.g. 3DES) running between two separate networks? I was reading something about FreeS/WAN. Also, a couple of months ago I saw a package pipsecd which no longer exists. So what is the easiest and best way to get it running?

The last time I did an IPSEC setup I used FreeS/WAN and had it up and running within a few hours. As far as I remember the setup process was quite straightforward, although you have to patch and recompile your kernel (which isn't a problem if you know what you're doing). Just take a look at the documentation at http://www.freeswan.org/freeswan_trees/freeswan-1.95/doc/index.html which is quite good. If you still have problems, just drop me a line via PM and I'll try to help as best I can.

HTH,
Markus
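For a net-to-net tunnel, the heart of the FreeS/WAN setup is one conn section in /etc/ipsec.conf. A sketch - the addresses, subnets and the connection name are placeholders for your own values:

```
# /etc/ipsec.conf - tunnel between two gateways
conn net-to-net
        left=192.0.2.1              # public IP of gateway A
        leftsubnet=10.1.0.0/24      # network behind gateway A
        right=198.51.100.1          # public IP of gateway B
        rightsubnet=10.2.0.0/24     # network behind gateway B
        authby=rsasig               # authenticate the peers with RSA keys
        auto=start                  # bring the tunnel up at IPSEC startup
```

The same file goes on both gateways; FreeS/WAN works out from the local addresses which side is "left" and which is "right".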
Re: multiple auth @samehost
On Mon, 2002-06-17 at 01:42, Marum wrote:
> I'm trying to put two domains on the same mail server, but I can't reuse a login if it already exists in the other domain.

Take a look at vmailmgr (www.vmailmgr.org). I use it in conjunction with qmail and courier-imap on several servers without any significant problems.

HTH,
Markus
Re: cold fusion 4.5 on Debian
On Fri, 2002-03-22 at 14:47, Theodore Knab wrote:
> Is anyone running Cold Fusion 4.5 on Debian?

Not 4.5, but I installed ColdFusion 5 without any problems on a bunch of Debian (woody) boxes for some of our customers. I guess 4.5 will work just fine too... BTW: you don't have to use alien and install some strange RPMs - just get the native package via 'apt-get install libstdc++2.9-glibc2.1'

> Are there any other simple packages that I might recommend as a dummy-proof alternative?

PHP4?

regards,
Markus
Re: RAID starter
On Thu, 2002-03-21 at 08:00, Angus D Madden wrote:
> Russell, would you recommend software RAID with a production system? Have you tried it? Curious. I wonder as well if anyone has tried the new IDE-RAID controllers.

I installed a system with a Promise FastTrak IDE-RAID controller a few weeks ago, and after compiling a fresh 2.4.17 kernel it worked just fine. I wouldn't recommend it for a production system, though, as the RAID has to be re-synced at BIOS stage as soon as a disk fails. So if you use a pair of 100 GB drives it will take several hours to get the system up again...

best regards,
Markus