On Tuesday 15 September 2009 18:30:01 John Cooper wrote:
> On 15/09/09 17:15, Chris Simmonds wrote:
> > Hi,
> >
> > I have a situation where I need to keep data on several PCs on a LAN in
> > sync. Any PC may update the data, with suitable locking, which must be
> > pushed out to all the others. It must be possible for a PC to go down
> > and be brought back on line again without impacting the others. The
> > amount of data is not large - say a few thousand items - and the
> > population of PCs is also modest - maybe 50 of them. All will be running
> > some version of Red Hat Linux.
> >
> > Has anyone worked on something along these lines?
> >
> > One option I have considered is using, say, MySQL with one master node
> > replicating to all the others and some mechanism to elect a new master
> > if the original went down. But, that sounds messy. There must be a
> > neater solution?
> >
> > Bye for now,
> > Chris.
> 
> I don't know the type of data you are talking about but have you had a
> look at Git (Linus Torvalds's versioning software)
> 
> http://www.kernel.org/pub/software/scm/git/docs/v1.2.6/tutorial.html
> 

This REALLY depends on the nature of the data you're managing (is it files, 
individual records or something else?).

I will assume that you're looking at a MySQL-type database solution.

Let's consider the availability requirement... If you have a server for the 50 
PCs, and the availability of each PC depends on that server, it doesn't make 
much sense to build a database system that is still available after the server 
fails. Generally, in that case, the database runs on the server, protected 
either by a simple backup (dumping the database) or by a slave database node.
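
For the single-server case a nightly dump goes a long way. A rough sketch in 
Python (untested; the database name and backup path are made up, and it 
assumes credentials live in ~/.my.cnf so mysqldump can log in):

import datetime
import os
import subprocess

DB_NAME = "shared_data"            # hypothetical database name
BACKUP_DIR = "/var/backups/mysql"  # hypothetical backup location

def dump_database():
    """Dump the whole database to a timestamped SQL file."""
    stamp = datetime.datetime.now().strftime("%Y%m%d-%H%M%S")
    outfile = os.path.join(BACKUP_DIR, "%s-%s.sql" % (DB_NAME, stamp))
    with open(outfile, "w") as f:
        # mysqldump picks up user/password from ~/.my.cnf, so nothing
        # sensitive appears on the command line.
        subprocess.check_call(["mysqldump", DB_NAME], stdout=f)
    return outfile

if __name__ == "__main__":
    print("Dumped to " + dump_database())

Run that from cron each night and you have a restore point that is at most 24 
hours old.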

If these PCs are NOT dependent on a single server, then we have a situation 
that is analogous to the internet, where a cluster of database servers provides 
data for many clients. Here using master and slave servers works well.
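
The client side of that arrangement is straightforward: send every write to 
the master and let reads come from whichever slave answers. A sketch with the 
MySQLdb module (all host names, credentials and the table are invented for 
illustration):

import MySQLdb

MASTER = "db-master.example.lan"    # hypothetical host names
SLAVES = ["db-slave1.example.lan", "db-slave2.example.lan"]

def connect(host):
    return MySQLdb.connect(host=host, user="app", passwd="secret",
                           db="shared_data")

def write_item(key, value):
    # All writes go to the master; replication fans them out to the slaves.
    conn = connect(MASTER)
    cur = conn.cursor()
    cur.execute("REPLACE INTO items (k, v) VALUES (%s, %s)", (key, value))
    conn.commit()
    conn.close()

def read_item(key):
    # Reads can be served by any slave; fall back to the master if none
    # of the slaves answer.
    for host in SLAVES + [MASTER]:
        try:
            conn = connect(host)
        except MySQLdb.OperationalError:
            continue
        cur = conn.cursor()
        cur.execute("SELECT v FROM items WHERE k = %s", (key,))
        row = cur.fetchone()
        conn.close()
        return row[0] if row else None
    raise RuntimeError("no database server reachable")

The clients are the easy bit, of course; the messy part Chris mentioned, 
electing a new master when the old one dies, is not solved by any of this.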

However, if there isn't a server, and at any one time 75% of the PCs are 
switched on, then we have a very different situation, where machines must 
propagate changes around the network and queue changes for hosts that are not 
up. This is potentially highly available (as you basically have a copy of the 
database on the local PC), but it is going to be quite tricky to administer.
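
At its simplest that means each PC keeps an outbound queue per peer and drains 
it whenever the peer comes back. A toy sketch (hypothetical host names and 
wire format; no locking or conflict resolution shown, and that is exactly 
where the tricky administration comes in):

import json
import socket

PEERS = ["pc01.lan", "pc02.lan", "pc03.lan"]   # hypothetical peers
PORT = 9999

# One outbound queue per peer; a change sits here until the peer is up.
pending = dict((p, []) for p in PEERS)

def broadcast(change):
    """Queue a change for every peer, then try to deliver."""
    for p in PEERS:
        pending[p].append(change)
    flush()

def flush():
    """Deliver queued changes; leave the queue alone if a peer is down."""
    for peer, queue in pending.items():
        while queue:
            try:
                s = socket.create_connection((peer, PORT), timeout=2)
            except socket.error:
                break   # peer is down; its queue waits for the next flush()
            s.sendall(json.dumps(queue[0]) + "\n")
            s.close()
            queue.pop(0)

# Call broadcast({"key": "item42", "value": "new"}) on each local update,
# and flush() from a timer so PCs that were off catch up when they return.

In real life you would persist the queues to disk and timestamp the changes so 
that two PCs editing the same item can be reconciled.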

You might like to consider your disaster recovery procedures as well. How long 
would it ACTUALLY take to return the database to its previous state if a 
drive or motherboard failed? Do you image the server system disk on a weekly 
basis and back up your database to an external disk, for instance?
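
It is worth timing the restore once rather than guessing. Continuing the dump 
sketch above (same invented names; the dump file name is just an example):

import subprocess
import time

def restore(dump_file):
    """Reload a mysqldump file into MySQL and report how long it took."""
    start = time.time()
    with open(dump_file) as f:
        subprocess.check_call(["mysql", "shared_data"], stdin=f)
    return time.time() - start

# print("Restore took %.1f seconds"
#       % restore("/var/backups/mysql/shared_data-20090915-020000.sql"))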

There are, of course, ways of making the hardware it runs on more resilient to 
failure: RAID arrays, multiple Ethernet cards, better cooling and so on.

I have always found that the simplest systems are the easiest to KEEP 
available. Hope this helps,

Tim B,

-- 
OpenPilot - Open-source Marine Chart Plotter
Lead Developer
http://openpilot.sourceforge.net
