On 10/09/2007 10:43 AM, Flaherty, Patrick wrote:
> I'm planning to set up an HA mysql cluster. The database serves as a
> backend to a set of webservers (HW load-balanced). The DB has light load,
> but when it breaks the site breaks, so I can't really get away with it
> as a single point of failure.
>
> So here were my options:
> http://dev.mysql.com/doc/refman/5.0/en/ha-overview.html
>
> Replication - One master server accepts writes, and on each write it
> ships its logs to the slave server(s). Async may not be a problem, but
> it seems silly there's no flag to wait for the slaves to report a
> write was successful.
> DRBD - Write all data onto a shared network block device. Use
> heartbeat to determine which server should be running MySQL, which
> lives on that shared block device. Use a crossover cable to prevent
> strange network issues.
> Cluster - Needs at least four nodes. Far too many for this setup.
>
> I think I've settled on the DRBD method, using a network block device
> and failing back and forth via heartbeat and a floating IP, though log
> shipping seems pretty straightforward.
>
> Does anyone have any positive or negative feedback on any of the
> methods?
>   
I'm using DRBD and heartbeat to do HA MySQL.  We've just moved our 
development databases over and will be moving production in a few weeks.
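
For reference, the heartbeat v1 side of a setup like this boils down to
one haresources line (node name, DRBD resource, device, and address
below are made up for illustration, not our actual config):

    # /etc/ha.d/haresources -- resources start left-to-right on
    # takeover and stop right-to-left on release
    db1 drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/mysql::ext3 IPaddr::192.168.1.100/24/eth0 mysql

The trailing "mysql" is just the init script; heartbeat looks for
resource scripts in /etc/ha.d/resource.d/ and /etc/init.d/.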

We went this way over replication (master-master) because I was able to 
break replication in some pretty easy (to me) ways.  The easiest was to 
fill the disk.  Once replication broke, it was really hard to get 
everything back in sync.
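
"Getting back in sync" basically meant rebuilding the slave from a
fresh dump of the master.  Roughly, the generic recipe (not a
transcript of what we ran):

    # on the master: consistent dump that records the binlog position
    # (--single-transaction assumes InnoDB tables)
    mysqldump --all-databases --single-transaction --master-data > dump.sql

    # on the slave: discard the diverged copy and reload
    mysql -e 'STOP SLAVE'
    mysql < dump.sql      # --master-data embeds CHANGE MASTER TO ...
    mysql -e 'START SLAVE'

Not much fun with a big database and a busy master.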

While DRBD does have some overhead, it's only on writes, and we've got 
very fast disk and network between the two systems.  In our testing 
there's about a 5-30 second failover window between the failure of the 
primary system and the secondary picking everything up: becoming DRBD 
primary, mounting and checking the FS, and then starting and checking 
MySQL.
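
Most of that window is heartbeat's dead-node detection plus the fsck
and MySQL's own log recovery.  The detection half is tunable in ha.cf;
something like this (values illustrative, not necessarily what we run):

    # /etc/ha.d/ha.cf
    keepalive 1          # seconds between heartbeats
    warntime  5          # warn about late heartbeats
    deadtime  10         # declare the peer dead after 10s of silence
    initdead  60         # extra slack at boot time
    bcast     eth1       # heartbeat over the crossover cable
    auto_failback off    # don't fail back automatically when the
                         # old primary returns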

IIRC, using NDB (Cluster) requires that most of the data reside in 
memory.  Since we have a 75GB+ database, this isn't really an option for us.
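
For scale: NDB's in-memory budget is set per data node in config.ini,
and with two replicas every row is stored twice (numbers here are
illustrative):

    # config.ini on the management node
    [NDBD DEFAULT]
    NoOfReplicas = 2
    DataMemory   = 4G     # per data node, holds the rows
    IndexMemory  = 512M   # per data node, holds hash indexes

    # 75GB of data x 2 replicas ~= 150GB of DataMemory spread across
    # the data nodes -- not happening on our hardware.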

-Mark

