Actually, I did this very thing a couple of years ago with M9000s and EMC DMX4s 
... with the exception of the "same host" requirement you have (i.e. the thing 
that requires the GUID change).

If you want to import the pool back into the host where the cloned pool is also 
imported, it's not just the zpool's GUID that needs to be changed, but the 
GUIDs of all the vdevs in the pool too.
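
You can see what's involved by dumping the vdev labels on one of the pool's 
devices, something like this (the device path here is just a placeholder):

   # zdb -l /dev/dsk/c1t0d0s0

Each of the four labels on each device carries the pool GUID as well as the 
GUID of the vdev itself, and all of them would have to be rewritten (the 
labels are checksummed, so it isn't a simple binary patch).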

When I did some work on OpenSolaris in Amazon S3, I noticed that someone had 
built a zpool mirror split utility (before we had the real thing) as a means to 
clone boot disk images. IIRC it was just a hack of zdb, but with the ZFS source 
out there it's not impossible to take a zpool and change all its GUIDs; it's 
just not trivial (the Amazon case only handled a single simple mirrored vdev).
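
(For completeness, the "real thing" we have now is zpool split, which detaches 
one side of each mirror into a new, separately importable pool with its own 
GUID. Roughly, with the pool names just examples:

   # zpool split tank tank-clone
   # zpool import tank-clone

But that only works for mirrored pools, and doesn't help with the 
array-snapshot scenario below.)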

Anyway, back to my EMC scenario...

The dear data centre staff I had to work with mandated the use of good old EMC 
BCVs. I pointed out that ZFS's "always consistent on disk" promise meant that 
it would "just work", but that this required a consistent snapshot of all the 
LUNs in the pool (a feature, on top of basic BCVs, that EMC charged even more 
for). Hoping to save money, my customer ignored my advice, and very quickly 
learned the error of their ways!

The "always consistent on disk" promise cannot be honoured if the vdev are 
snapshot at different times. On a quiet system you may get lucky in simple 
tests, only to find that a snapshot from a busy production system causes a 
system panic on import (although the more recent automatic uberblock recovery 
may save you).
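
(If that recovery doesn't kick in automatically, the manual equivalent is 
roughly this, with the pool name just an example:

   # zpool import -F tank

i.e. rewinding to an older uberblock by discarding the last few transactions.)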

The other thing I would add to your procedure is to take a ZFS snapshot just 
before taking the storage-level snapshot. You could combine this with quiescing 
applications, but the real benefit is that you get a known point in time at 
which all non-synchronous application-level writes are temporally consistent.
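
Something along these lines, with the pool and snapshot names just examples:

   # zfs snapshot -r tank@pre-array-snap

That way the array-level copy is guaranteed to contain a named, recursive 
snapshot you can roll back to (or clone from) once the copy is imported.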

Phil
http://harmanholistix.com

On 15 Nov 2010, at 10:11, sridhar surampudi <toyours_srid...@yahoo.co.in> wrote:

> Hi I am looking in similar lines,
> 
> my requirement is 
> 
> 1. Create a zpool on one or many devices (LUNs) from an array (the array can 
> be IBM or HPEVA or EMC etc., not SS7000).
> 2. Create file systems on the zpool
> 3. Once the file systems are in use (I/O is happening) I need to take a 
> snapshot at array level:
> a. Freeze the ZFS file system (not required due to ZFS consistency; source: 
> mailing lists)
> b. Take an array snapshot (say, IBM FlashCopy)
> c. Get a new snapshot device (having the same data and metadata, including 
> the same GUID as the source pool)
> 
> Now I need a way to change the GUID and pool name of the snapshot device so 
> that the snapshot device can be accessed on the same host or on an alternate 
> host (if the LUN is shared).
> 
> Could you please post commands for the same.
> 
> Regards,
> sridhar.
> -- 
> This message posted from opensolaris.org
_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
