Re: [zfs-discuss] zpool split how it works?

2010-11-15 Thread Craig Cory
From the OpenSolaris ZFS FAQ page:

http://hub.opensolaris.org/bin/view/Community+Group+zfs/faq

If you want to use a hardware-level backup or snapshot feature instead of
the ZFS snapshot feature, then you will need to perform the following steps:

* zpool export pool-name
* Hardware-level snapshot steps
* zpool import pool-name
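
For example, a minimal sketch of that sequence (pool-name is just a placeholder):

     # close the pool so its devices are in a consistent, quiesced state
     zpool export pool-name

     # ... take the hardware/array-level snapshot of the pool's devices here ...

     # bring the original pool back online
     zpool import pool-name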





sridhar surampudi wrote:
> Hi Darren,
>
> Thanks for your info.
>
> Sorry, the below might be a bit lengthy:
>
> Yes, I am looking for the actual implementation rather than how to use zpool split.
>
> My requirement is not at the ZFS file system level, and it does not involve ZFS
> snapshots.
>
>
> As I understand it,
> if my zpool, say mypool, is created using "zpool create mypool mirror device1
> device2", then after running "zpool split mypool newpool device2" I can access
> device2 as newpool.
>
> The same data is available on newpool as on mypool, as long as there are no
> writes/modifications to newpool.
>
> What I am looking for is this:
>
> If my devices (say the zpool is created with only one device, device1) are from
> an array and I take an array snapshot (zfs/zpool does not come into the picture,
> since I take a hardware snapshot), I will get a snapshot device, say device2.
>
> I am looking for a way to use the snapshot device device2 by recreating the
> zpool and zfs stack with an alternate name.
>
> "zpool split" must be making some changes to the metadata on device2 to
> associate it with the new name, i.e. newpool.
>
> I want to do the same for a snapshot device created using an array/hardware
> snapshot.
>
> Thanks & Regards,
> sridhar.


-- 
Craig Cory
 Senior Instructor :: ExitCertified
 : Sun Certified System Administrator
 : Sun Certified Network Administrator
 : Sun Certified Security Administrator
 : Veritas Certified Instructor

 8950 Cal Center Drive
 Bldg 1, Suite 110
 Sacramento, California  95826
 [e] craig.c...@exitcertified.com
 [p] 916.669.3970
 [f] 916.669.3977



Re: [zfs-discuss] zpool split how it works?

2010-11-10 Thread sridhar surampudi
Hi Darren,

Thanks for your info. 

Sorry, the below might be a bit lengthy:

Yes, I am looking for the actual implementation rather than how to use zpool split.

My requirement is not at the ZFS file system level, and it does not involve ZFS
snapshots.


As I understand it,
if my zpool, say mypool, is created using "zpool create mypool mirror device1
device2", then after running "zpool split mypool newpool device2" I can access
device2 as newpool.

The same data is available on newpool as on mypool, as long as there are no
writes/modifications to newpool.
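
For reference, a minimal sketch of that sequence (pool and device names are only
placeholders) would be:

     # mirrored pool on two devices
     zpool create mypool mirror device1 device2

     # detach device2 and stamp it with the new pool name and a new pool GUID
     zpool split mypool newpool device2

     # the split-off pool is left exported; import it to start using it
     zpool import newpool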

What I am looking for is this:

If my devices (say the zpool is created with only one device, device1) are from
an array and I take an array snapshot (zfs/zpool does not come into the picture,
since I take a hardware snapshot), I will get a snapshot device, say device2.

I am looking for a way to use the snapshot device device2 by recreating the
zpool and zfs stack with an alternate name.

"zpool split" must be making some changes to the metadata on device2 to
associate it with the new name, i.e. newpool.

I want to do the same for a snapshot device created using an array/hardware
snapshot.
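
For context, the closest I have found with the stock tools is renaming the pool
at import time. A sketch only (the device path is made up, and this assumes the
snapshot LUN is presented to a host where the original pool is not imported,
since the hardware copy still carries the original pool's GUID):

     # scan the directory holding the snapshot LUN and list importable pools
     zpool import -d /dev/dsk

     # import the copy under an alternate name
     zpool import -d /dev/dsk mypool newpool

As I understand it, "zpool split" avoids that restriction by rewriting the label
on the split-off device with the new pool name and a new GUID before it leaves
the original pool.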

Thanks & Regards,
sridhar.


[zfs-discuss] zpool split how it works?

2010-11-10 Thread sridhar surampudi
Hi,

I was wondering how zpool split works, or how it is implemented.

If a pool pool1 is on a mirror with two devices dev1 and dev2, then using
zpool split I can split it off under a new pool name, say pool-mirror, on dev2.

How can split change the metadata on dev2 and rename/replace it to associate
it with the new name, i.e. pool-mirror?

Could you please let me know more about it?

Thanks & Regards,
sridhar.


Re: [zfs-discuss] zpool split how it works?

2010-11-10 Thread Mark J Musante

On Wed, 10 Nov 2010, Darren J Moffat wrote:


> On 10/11/2010 11:18, sridhar surampudi wrote:
>> I was wondering how zpool split works, or how it is implemented.
>
> Or are you really asking about the implementation details?  If you want
> to know how it is implemented then you need to read the source code.

Also, you can read the blog entry I wrote up after it was put back:

http://blogs.sun.com/mmusante/entry/seven_years_of_good_luck


Re: [zfs-discuss] zpool split how it works?

2010-11-10 Thread Darren J Moffat

On 10/11/2010 11:18, sridhar surampudi wrote:

> I was wondering how zpool split works, or how it is implemented.
>
> If a pool pool1 is on a mirror with two devices dev1 and dev2, then using
> zpool split I can split it off under a new pool name, say pool-mirror, on dev2.
>
> How can split change the metadata on dev2 and rename/replace it to associate
> it with the new name, i.e. pool-mirror?


Exactly what isn't clear from the description in the man page?

     zpool split [-R altroot] [-n] [-o mntopts] [-o
     property=value] pool newpool [device ...]

         Splits off one disk from each mirrored top-level vdev in
         a pool and creates a new pool from the split-off disks.
         The original pool must be made up of one or more mirrors
         and must not be in the process of resilvering. The split
         subcommand chooses the last device in each mirror vdev
         unless overridden by a device specification on the
         command line.

         When using a device argument, split includes the
         specified device(s) in a new pool and, should any devices
         remain unspecified, assigns the last device in each
         mirror vdev to that pool, as it does normally. If you are
         uncertain about the outcome of a split command, use the
         -n ("dry-run") option to ensure your command will have
         the effect you intend.
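
For example (pool and device names are illustrative only), you can preview the
split and then perform it:

     # show what the split would do without changing anything
     zpool split -n pool1 pool-mirror dev2

     # actually split dev2 off into the new pool
     zpool split pool1 pool-mirror dev2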

Or are you really asking about the implementation details?  If you want 
to know how it is implemented then you need to read the source code.


Here would be a good starting point:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/lib/libzfs/common/libzfs_pool.c#zpool_vdev_split

Which ends up in kernel here:

http://src.opensolaris.org/source/xref/onnv/onnv-gate/usr/src/uts/common/fs/zfs/zfs_ioctl.c#zfs_ioc_vdev_split
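
If you want to see the effect rather than the code, you can also compare the
vdev labels before and after a split with zdb (the device path below is
illustrative):

     # dump the on-disk label of the split-off device; the pool name,
     # pool_guid and vdev tree it reports are rewritten by the split
     zdb -l /dev/dsk/c1t1d0s0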


--
Darren J Moffat