On May 12, 2010, at 3:06 PM, Manoj Joseph <manoj.p.jos...@oracle.com> wrote:

Ross Walker wrote:
On May 12, 2010, at 1:17 AM, schickb <schi...@gmail.com> wrote:

I'm looking for input on building an HA configuration for ZFS. I've
read the FAQ and understand that the standard approach is to have a
standby system with access to a shared pool that is imported during
a failover.

The problem is that we use ZFS for a specialized purpose that
results in tens of thousands of filesystems (mostly snapshots and
clones). All versions of Solaris and OpenSolaris that we've tested
take a long time (more than an hour) to import that many filesystems.
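
The standard failover being described is roughly the sequence below on the surviving node (a hedged sketch; "tank" is a placeholder pool name). With tens of thousands of datasets, most of the time is typically spent mounting and sharing them rather than importing the pool itself:

    # force-import the shared pool on the standby head; by default this
    # mounts every dataset, which is the slow step with many filesystems
    zpool import -f tank
    # re-establish shares for datasets with sharenfs/sharesmb set
    zfs share -a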

I've read about replication through AVS, but that also seems to require
an import during failover. We'd need something closer to an
active-active configuration (even if the second active is only modified
through replication), or some way to greatly speed up imports.

Any suggestions?
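
One hedged sketch of what "speeding up the import" could look like, assuming a build whose zpool import supports -N (import without mounting); "tank" is again a placeholder, and the unbounded background loop is only illustrative, since a real script would throttle concurrency and mount parent filesystems before their children:

    # import the pool without mounting anything (-N is an assumption about the build)
    zpool import -N -f tank
    # mount filesystems concurrently instead of serially
    for fs in $(zfs list -H -o name -t filesystem -r tank); do
        zfs mount "$fs" &
    done
    wait
    # re-establish shares
    zfs share -a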

Bypass the complexities of AVS and the start-up times by implementing
a ZFS head server on a pair of ESX/ESXi hosts with a hot spare, using
redundant back-end storage (EMC, NetApp, EqualLogic).

Then, if there is a hardware or software failure of the head server or
the host it is on, the hot-spare automatically kicks in with the same
running state as the original.

By hot-spare here, I assume you are talking about a hot-spare ESX
virtual machine.

If there is a software issue and the hot-spare server comes up with the same state, is it not likely to fail just like the primary server? If it
does not, can you explain why it would not?

That's a good point and worth looking into. I guess it would fail as well, since a VMware hot spare is like a VM in constant vMotion, where active memory is mirrored between the two.

I suppose one would need a hot spare for hardware failures and a cold spare for software failures. Both scenarios are possible with ESX; the cold spare in this instance would, I suppose, be the original VM rebooting.

Recovery time in that case would be about the same as an AVS solution that has to mount 10,000 filesystems, though, so this approach wins on a hardware failure and ties on a software failure; it also wins on ease of setup and maintenance, but loses on additional cost. I guess it all depends on your risk analysis whether it is worth it.

-Ross
