Hi All, 

I'm sure this is a question that has come up before, however I can't
find a reference to it at the moment so I thought I'd ask again!

I'm in the process of building a web-cluster based on Linux HA.

At the moment, there is a single server that hosts over 200 sites.  We
are moving to an architecture with two director servers running
ldirectord, using least-connection scheduling to balance http and https
traffic across a number of web nodes running a slightly modified LAMP stack.

At the moment in our labs, we are relying on a custom-written script to
push the content of /home/www-data/ (where all the virtual hosts live)
from the first node (and primary FTP point) to the other nodes in the
cluster using scp, then ssh-ing in to each node to reload the apache2
configs.  There has to be a better way to do this!
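For reference, our push amounts to something like the sketch below (not
the actual script; node names and paths are placeholders, and I've used
rsync in place of scp here since it only transfers deltas).  The echo
makes it a dry run - drop it to actually push:

```shell
#!/bin/sh
# Hypothetical node list and docroot - adjust to your cluster.
NODES="web2 web3"
SRC=/home/www-data/

for node in $NODES; do
    # rsync only sends changed files; --delete mirrors removals too.
    # The leading echo makes this a dry run; remove it to really sync.
    echo rsync -az --delete "$SRC" "$node:$SRC"
    # Reload apache2 on the node so new vhost configs take effect.
    echo ssh "$node" /etc/init.d/apache2 reload
done
```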

My concern is that when we come to implement this, we will need to
integrate the currently running webserver as a node in the cluster with
_minimal_ downtime (if we can do it with zero downtime, that would be
great!)

What I am after is a file-syncing mechanism that can be installed on
all nodes _without_ the need to reformat the partitions on the current
webserver.

I've looked at OCFS, GFS and Lustre; however, as I'm starting to get out
of my depth on this, I'd really appreciate some input from the
community!

Thanks as always,

Matt
-- 
Matthew Macdonald-Wallace
[EMAIL PROTECTED]
http://www.truthisfreedom.org.uk
_______________________________________________
Linux-HA mailing list
[email protected]
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems