RE: Debian in Server Farm

2004-04-01 Thread Michael Bellears
Pigeon wrote:
 On Thu, Apr 01, 2004 at 08:24:36AM +1000, Michael Bellears wrote:
 
 [Michael, can you please leave in attribution lines, and perhaps fix
 your mailer so that it supports References: or
 In-Reply-To: headers to support other threading-aware mailers?]
 
 Lookout does not appear to have this feature - I'm sure I have seen
 an add-on though...
 
 It certainly supports attributions; the format is
 distinctively horrible, but it does do it.
 
 Looking at traffic from other lists it appears that it also
 supports References:, though possibly not In-Reply-To:.
 
 Unfortunately, I can't tell you how to turn this on!

Found the add-ons if anyone is interested:

http://home.in.tum.de/~jain/software/oe-quotefix/
http://home.in.tum.de/~jain/software/outlook-quotefix/



Re: Debian in Server Farm

2004-04-01 Thread Pigeon
On Thu, Apr 01, 2004 at 08:24:36AM +1000, Michael Bellears wrote:
  
  [Michael, can you please leave in attribution lines, and 
  perhaps fix your mailer so that it supports References: or 
  In-Reply-To: headers to support other threading-aware mailers?]
 
 Lookout does not appear to have this feature - I'm sure I have seen an
 add-on though...

It certainly supports attributions; the format is distinctively
horrible, but it does do it.

Looking at traffic from other lists it appears that it also supports
References:, though possibly not In-Reply-To:.

Unfortunately, I can't tell you how to turn this on!

-- 
Pigeon

Be kind to pigeons
Get my GPG key here: http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x21C61F7F




Re: Debian in Server Farm

2004-03-31 Thread Adrian 'Dagurashibanipal' von Bidder
[Michael, can you please leave in attribution lines, and perhaps fix your 
mailer so that it supports References: or In-Reply-To: headers to support 
other threading-aware mailers?]

On Wednesday 31 March 2004 01.14, Michael Bellears wrote:
 Steve:

  I don't have a lot of experience with this but I would
  configure syslogd to send logging info to a master log
 server. I think it is clear which host they came from
  in this configuration.

 Agreed.

 qmail logs will be my only issue - but I will ask on the qmail list.

An excellent reason to drop qmail, isn't it ;-)
[no, I won't discuss this here]

qmail can log to syslog, can't it? I'm sure it did on the machines where I 
saw it.

cheers
-- vbi

-- 
featured product: the KDE desktop - http://kde.org




Re: Debian in Server Farm

2004-03-31 Thread Adam Aube
Adrian 'Dagurashibanipal' von Bidder wrote:

 qmail can log to syslog, can't it? I'm sure it did on the machines where I
 saw it.

The recommended install (Life with qmail) has qmail logging through multilog
(part of the daemontools package), but yes, it can log through syslog.
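
For the syslog route, the classic startup script from the qmail docs just
pipes qmail-send's output through splogger; roughly (paths as in a standard
/var/qmail install):

#!/bin/sh
# /var/qmail/rc - start qmail-send, logging to syslog via splogger
# (splogger tags the entries "qmail" and uses the mail facility)
exec env - PATH="/var/qmail/bin:$PATH" \
    qmail-start ./Mailbox splogger qmail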

Adam


-- 



RE: Debian in Server Farm

2004-03-31 Thread Michael Bellears
 
 The recommended install (Life with qmail) has qmail logging 
 through multilog (part of the daemontools package), but yes, 
 it can log through syslog.
 

Came across this if anyone's interested - the following describes how to
log to a remote host using multilog and tcpclient:

http://smarden.org/socklog/network.html
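
If I read that page right, the gist on the sending side is something like
this (untested; "loghost" and the port are placeholders - see the page for
the matching server setup):

#!/bin/sh
# log/run for a qmail service: ship the log stream to a remote
# collector with tcpclient (ucspi-tcp) instead of writing it locally
# with multilog. tcpclient gives the child fd 7 for writing to the
# network, so cat just copies stdin (the log pipe) out over TCP.
exec setuidgid qmaill \
    tcpclient -RHl0 loghost 10116 sh -c 'cat >&7'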

Regards,
MB



RE: Debian in Server Farm

2004-03-31 Thread Michael Bellears
 
 [Michael, can you please leave in attribution lines, and 
 perhaps fix your mailer so that it supports References: or 
 In-Reply-To: headers to support other threading-aware mailers?]

Lookout does not appear to have this feature - I'm sure I have seen an
add-on though...

MB



Re: Debian in Server Farm

2004-03-30 Thread Adrian 'Dagurashibanipal' von Bidder
On Tuesday 30 March 2004 07.04, Michael Bellears wrote:

 Would appreciate anyone's experiences/recommendations on the
 following points:

 1. What is the recommended method to synch config files on all real
 servers (e.g. httpd.conf, horde/imp config files, etc.?) - Have only one
 server that admins connect to for mods, then rsync any changes to the
 other servers?

Would be one way. Another way is to set up a cvs or subversion 
repository and use this to distribute the config file. Has the 
additional bonus of being able to trace back how the config file was 
changed.
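
A minimal sketch of that (repo URL and paths are just examples) on each
real server:

# one-time: check out the shared configs from the repository
svn checkout svn+ssh://master.example.com/var/svn/configs /usr/local/configs

# from cron, or by hand after committing a change on the master:
cd /usr/local/configs && svn update
cp -p apache/httpd.conf /etc/apache/httpd.conf
apachectl graceful    # gracefully restart apache to pick up the change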


 2. What about logfiles - We would have all users' mail etc on an NFS
 share - Can you do the same for logfiles? (Or do you get locking
 issues?) - From a statistical aspect, it would be a pain to have to
 collate each real server's logfiles, then run analysis. Also
 from a support perspective - How are support personnel supposed to
 know which real server a client would actually be connecting to in
 order to see if they are entering a wrong username/pass etc?

Use a dedicated server for logging and have all servers send the log to 
that over the network.


 3. Imaging of Servers[...]

No idea.

http://infrastructures.org is a very useful web site about 
administrating machines that are supposed to stay the same.

cheers
-- vbi

-- 
featured link: http://fortytwo.ch/gpg/intro




Re: Debian in Server Farm

2004-03-30 Thread Steve Witt
On Tue, 30 Mar 2004, Michael Bellears wrote:

 We are in the process of migrating an overburdened Debian
 3.0/Apache/qmail box into a webfarm setup.

 Looking at using a ServerIronXL for loadbalancing.

 Would appreciate anyone's experiences/recommendations on the following
 points:

 1. What is the recommended method to synch config files on all real
 servers (e.g. httpd.conf, horde/imp config files, etc.?) - Have only one
 server that admins connect to for mods, then rsync any changes to the
 other servers?

I asked a similar question a few months ago and someone suggested
'cfengine'. I started using it and, after a bit of a learning curve, I have
probably 30 machines (Debian woody) being managed automatically by it. It
works great. I think the version in woody is old, so I got it from the
upstream site. Basically you can store configuration files and other
actions on a master server. Then you can cause (through cron, for
example) each client machine to be updated with current config files and
other actions. These files can be scripts, so essentially you can do
pretty much whatever you want to do.

For example, I have a list of the Debian packages that should be present
as one of the config files that gets transferred to each machine when
cfengine runs on the master. There is another script that runs on each
machine (also controlled by cfengine) that sets this new list of packages
(dpkg --set-selections) and then runs apt-get update/upgrade, etc. So to
add a package to my machines I just edit the one package file on the
master, and then the clients get updated either when cfengine runs through
cron (once a day for me), or you can run it manually if you need the
update sooner. It works really well.
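
The package piece is only a few lines; something like this (the list path
is made up) run on each client after cfengine has copied the list down:

#!/bin/sh
# /usr/local/sbin/sync-packages - apply the master package list.
# The list has one "packagename  install" line per package, which is
# the format dpkg --set-selections expects.
dpkg --set-selections < /etc/master-packages.list
apt-get update
apt-get -y dselect-upgrade   # install/remove to match the selections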


 2. What about logfiles - We would have all users' mail etc on an NFS
 share - Can you do the same for logfiles? (Or do you get locking issues?)
 - From a statistical aspect, it would be a pain to have to collate
 each real server's logfiles, then run analysis. Also from a support
 perspective - How are support personnel supposed to know which real
 server a client would actually be connecting to in order to see if they
 are entering a wrong username/pass etc?

I don't have a lot of experience with this but I would configure syslogd
to send logging info to a master log server. I think it is clear which
host they came from in this configuration.
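
That's a two-line change with the stock sysklogd (hostname is just an
example):

# on the central log server (Debian): let syslogd accept remote
# messages on UDP 514 - in /etc/default/syslogd:
SYSLOGD="-r"

# on each real server, in /etc/syslog.conf: forward everything
*.*     @loghost.example.com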


 3. Imaging of Servers - I have looked at SystemImager
 http://www.systemimager.org/, and it looks to do exactly what I want
 (i.e. be able to create a bootable CD from our SOE for deployment of new
 serverfarm boxes, or quick recovery from failure) - Can anyone provide
 feedback as to its effectiveness?

I am still struggling with systemimager. The machines I want to image have
gigabit Ethernet devices that require a newer kernel than was available
when I first tried it (about 2 months ago). I didn't have the time to get
it working, but I don't think that was systemimager's fault. I had trouble
getting a
new kernel compiled with the new Ethernet driver and ran out of time.
Hopefully I can get back to it, because it does seem like exactly the
right tool for the job.


-- 



Re: Debian in Server Farm

2004-03-30 Thread Mark Ferlatte
Michael Bellears said on Tue, Mar 30, 2004 at 03:04:58PM +1000:
 1. What is the recommended method to synch config files on all real
 servers (e.g. httpd.conf, horde/imp config files, etc.?) - Have only one
 server that admins connect to for mods, then rsync any changes to the
 other servers?
 
Not sure about recommended, but cfengine is pretty good.  You can go a long
way with CVS + cvsup, too, but I think the best solution is to put configs in
CVS/subversion, and use cfengine to handle deploying new versions of configs.
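
As a rough sketch of the cfengine half (cfengine 2 syntax; host and paths
are made up - the master would hold a checkout of the repo under
/masterfiles):

# cfagent.conf fragment on the clients
copy:
    /masterfiles/etc/apache/httpd.conf
        dest=/etc/apache/httpd.conf
        server=master.example.com
        mode=644
        type=checksum
        define=apache_changed

shellcommands:
    apache_changed::
        "/usr/sbin/apachectl graceful"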

 2. What about logfiles - We would have all users' mail etc on an NFS
 share - Can you do the same for logfiles? (Or do you get locking issues?)

logserver, as mentioned before.

 3. Imaging of Servers - I have looked at SystemImager
 http://www.systemimager.org/, and it looks to do exactly what I want
 (i.e. be able to create a bootable CD from our SOE for deployment of new
 serverfarm boxes, or quick recovery from failure) - Can anyone provide
 feedback as to its effectiveness?

I love it.  I've got a cluster of about 200 machines that I manage using
systemimager.  If you're installing onto newer hardware, you will almost
certainly have to build a custom si kernel to add newer drivers for ethernet
cards or RAID cards.  It's not particularly difficult, just a tad time-consuming.

The notes below show how to add a newer e1000 driver to the systemimager
kernel; they may help.

M

mkdir ~/src/systemimager
cd ~/src/systemimager
apt-get source systemimager # ~ 45MB
cd systemimager-3.0.1
tar -xjf systemimager-3.0.1.tar.bz2
cd systemimager-3.0.1
make patched_kernel-stamp

There is now a systemimager kernel source tree in src/linux-2.4.20

Download the e1000 source from Intel:
http://support.intel.com/support/network/sb/cs-006120-prd38.htm

mkdir ~/src/e1000
cd ~/src/e1000
tar -xzf e1000.tar.gz
cd e1000/src
Edit Makefile:
Set KSP = ~/src/systemimager/systemimager-3.0.1/systemimager-3.0.1/src/linux-2.4.20
make
cp e1000.o ~/src/systemimager/systemimager-3.0.1/systemimager-3.0.1/initrd_source/my_modules
cd ~/src/systemimager/systemimager-3.0.1/systemimager-3.0.1/initrd_source/my_modules
Edit INSMOD_COMMAND
Add insmod ./e1000.o
cd ~/src/systemimager/systemimager-3.0.1
Edit FLAVOR
Change it to something else (like e1000) for testing.
make binaries
Go get lunch... this takes a _long_ time.
sudo make install_binaries
This puts the binaries into /usr/share/systemimager/boot/i386/e1000

Now copy kernel, config, and initrd.img from e1000 to the master standard
image, and to the master /tftpboot.





RE: Debian in Server Farm

2004-03-30 Thread Michael Bellears
Another way is to set up a cvs or 
 subversion repository and use this to distribute the config 
 file. Has the additional bonus of being able to trace back 
 how the config file was changed.

Like it!

A couple of other responses also suggested cfengine (and combining cvs
with cfengine), so I'll definitely look into both.

 
 
  2. What about logfiles - We would have all users' mail etc on an NFS
  share - Can you do the same for logfiles? (Or do you get locking
  issues?) - From a statistical aspect, it would be a pain to have to
  collate each real server's logfiles, then run analysis. Also from
  a support perspective - How are support personnel supposed to know
  which real server a client would actually be connecting to in order
  to see if they are entering a wrong username/pass etc?
 
 Use a dedicated server for logging and have all servers send 
 the log to that over the network.


Can do that with Apache/horde - but qmail can't(?) when using
multilog... might ask on the qmail list.
 
 
  3. Imaging of Servers[...]
 
 No idea.
 
 http://infrastructures.org is a very useful web site about 
 administrating machines that are supposed to stay the same.

Ta - quite a bit of reading there!

Regards,
MB
 



RE: Debian in Server Farm

2004-03-30 Thread Michael Bellears
  1. What is the recommended method to synch config files on all real
  servers (e.g. httpd.conf, horde/imp config files, etc.?) - Have only one
  server that admins connect to for mods, then rsync any changes to the
  other servers?
 
 I asked a similar question a few months ago and someone 
 suggested 'cfengine'. I started using it and, after a bit of 
 learning curve, I have probably 30 machines (Debian woody) 
 being managed automatically by it. It works great. I think 
 the version in woody is old, so I got it from the upstream 
 site. Basically you can store configuration files and other 
 actions on a master server. Then you can cause (through cron, for
 example) each client machine to be updated with current 
 config files and other actions. These files can be scripts, 
 so essentially you can do pretty much whatever you want to do.
 
 For example, I have a list of the Debian packages that should 
 be present as one of the config files that gets transferred 
 to each machine when cfengine runs on the master. There is 
 another script that runs on each machine (also controlled by 
 cfengine)  that sets this new list of packages (dpkg 
 --set-selections) and then runs apt-get update/upgrade, etc. 
 So to add a package to my machines I just edit the one 
 package file on the master and then the clients get updated
 either when cfengine runs through cron (once a day for me) or 
 you could run it manually at that time if you needed the 
 update sooner. It works really well.

Thanks for the info - cfengine looks excellent!

 
 
  2. What about logfiles - We would have all users' mail etc on an NFS
  share - Can you do the same for logfiles? (Or do you get locking
  issues?) - From a statistical aspect, it would be a pain to have to
  collate each real server's logfiles, then run analysis. Also from
  a support perspective - How are support personnel supposed to know
  which real server a client would actually be connecting to in order
  to see if they are entering a wrong username/pass etc?
 
 I don't have a lot of experience with this but I would 
 configure syslogd to send logging info to a master log 
 server. I think it is clear which host they came from
 in this configuration.

Agreed.

qmail logs will be my only issue - but I will ask on the qmail list. 

Regards,
MB



RE: Debian in Server Farm

2004-03-30 Thread Michael Bellears
  
 Not sure about recommended, but cfengine is pretty good.  
 You can go a long way with CVS + cvsup, too, but I think the 
 best solution is to put configs in CVS/subversion, and use 
 cfengine to handle deploying new versions of configs.

Like that suggestion a lot! - Thanks.

  3. Imaging of Servers - I have looked at SystemImager
  http://www.systemimager.org/, and it looks to do exactly what I want
  (i.e. be able to create a bootable CD from our SOE for deployment of
  new serverfarm boxes, or quick recovery from failure) - Can anyone
  provide feedback as to its effectiveness?
 
 I love it.  I've got a cluster of about 200 machines that I 
 manage using systemimager.  If you're installing onto newer 
 hardware, you will almost certainly have to build a custom si 
 kernel to add newer drivers for ethernet cards or RAID cards. 
  It's not particularly difficult, just a tad time-consuming.
 
 The notes below show how to add a newer e1000 driver to the
 systemimager kernel; they may help.
 
 M
 
 mkdir ~/src/systemimager
 cd ~/src/systemimager
 apt-get source systemimager # ~ 45MB
 cd systemimager-3.0.1
 tar -xjf systemimager-3.0.1.tar.bz2
 cd systemimager-3.0.1
 make patched_kernel-stamp
 
 There is now a systemimager kernel source tree in src/linux-2.4.20
 
 Download the e1000 source from Intel:
 http://support.intel.com/support/network/sb/cs-006120-prd38.htm
 
 mkdir ~/src/e1000
 cd ~/src/e1000
 tar -xzf e1000.tar.gz
 cd e1000/src
 Edit Makefile:
 Set KSP = ~/src/systemimager/systemimager-3.0.1/systemimager-3.0.1/src/linux-2.4.20
 make
 cp e1000.o ~/src/systemimager/systemimager-3.0.1/systemimager-3.0.1/initrd_source/my_modules
 cd ~/src/systemimager/systemimager-3.0.1/systemimager-3.0.1/initrd_source/my_modules
 Edit INSMOD_COMMAND
 Add insmod ./e1000.o
 cd ~/src/systemimager/systemimager-3.0.1
 Edit FLAVOR
 Change it to something else (like e1000) for testing.
 make binaries
 Go get lunch... this takes a _long_ time.
 sudo make install_binaries
 This puts the binaries into /usr/share/systemimager/boot/i386/e1000
 
 Now copy kernel, config, and initrd.img from e1000 to the master
 standard image, and to the master /tftpboot.

Thanks HEAPS for the info on systemimager! - Going to set it up on some
test servers today.

Regards,
MB