Re: [zfs-discuss] zpool import is this safe to use -f option in this case ?

2010-11-17 Thread sridhar surampudi
Hi,

My understanding is that ZFS itself is a great file system, combining the file system and volume manager with the numerous features added to it.

Along the same lines, existing fs/vm stacks and array snapshots are still in use, and customers are requesting a similar kind of support for ZFS.

So it would be a great help to get a similar interface that matches the use cases they are looking for, along with the new features.

If a customer is running Solaris 9/10 with a UFS/SVM stack, backup applications run using the method mentioned (in the earlier thread and other threads).

If somebody moves from a UFS/SVM stack to ZFS, the customer expects the backup application to run in a similar configuration (at least for now).

To match these requirements, zfs/zpool support is required.

Of course, down the line I am sure applications will start using the new methods provided by ZFS.

Regards,
sridhar.
-- 
This message posted from opensolaris.org
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


[zfs-discuss] zpool import is this safe to use -f option in this case ?

2010-11-16 Thread sridhar surampudi
Hi,

I have done the following (which is required for my case):

1. Created a zpool (smpool) on a device/LUN from an array (IBM 6K) on host1.
2. Created an array-level snapshot of the device to another device using dscli, which was successful.
3. Made the snapshot device visible to another host (host2).

I tried zpool import smpool and got a warning message that host1 is using this pool (presumably the smpool metadata has stored this information), with a prompt to use -f.

When I tried zpool import with the -f option, I was able to successfully import the pool on host2 and access all file systems and snapshots.

My query is: in this scenario, is it always safe to use -f to import? Would there be any issues?
Also, I have observed that zpool import took some time to complete successfully. Is there a way to minimize the time of the zpool import -f operation?
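For reference, the sequence on host2 looks roughly like this (pool name from above; the -f is needed because the clone still carries host1's hostid in its ZFS label):

```shell
# On host2: scan for importable pools; the cloned LUN shows up as
# "smpool", marked as last accessed by host1.
zpool import

# Force the import despite the foreign hostid recorded in the label.
# This is safe only because the clone is independent storage -- never
# force-import a pool that another live host is actually writing to.
zpool import -f smpool

# When done, export cleanly so the label no longer claims this host.
zpool export smpool
```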

Regards,
sridhar.


Re: [zfs-discuss] how to quiesce and unquiesc zfs and zpool for array/hardware snapshots ?

2010-11-15 Thread sridhar surampudi
Hi Andrew,

Regarding your point 
-
You will not be able to access the hardware 
snapshot from the system which has the original zpool mounted, because 
the two zpools will have the same pool GUID (there's an RFE outstanding 
on fixing this).

Could you please provide more references on the above? I am looking for options for accessing the snapshot device by reconfiguring it with a new pool name (and, in turn, a new GUID).

Thanks & Regards,
sridhar.


Re: [zfs-discuss] Changing GUID

2010-11-15 Thread sridhar surampudi
Hi, I am looking along similar lines.

My requirement is:

1. Create a zpool on one or many devices (LUNs) from an array (the array can be IBM, HP EVA, EMC, etc., not SS7000).
2. Create file systems on the zpool.
3. Once the file systems are in use (I/O is happening), take a snapshot at the array level:
 a. Freeze the ZFS file system (not required, due to ZFS consistency; source: mailing lists).
 b. Take the array snapshot (say, IBM FlashCopy).
 c. Get a new snapshot device (having the same data and metadata, including the same GUID as the source pool).

Now I need a way to change the GUID and pool name of the snapshot device, so that the snapshot device can be accessed on the same host or on an alternate host (if the LUN is shared).

Could you please post commands for the same.
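Later ZFS releases added a zpool reguid subcommand for exactly this; as a hedged sketch (not available in the builds discussed in this thread, and the pool names below are placeholders):

```shell
# Import the clone under a new name, on a host where the original
# pool is not imported (same-host import fails while GUIDs collide).
zpool import -f smpool smpoolsnap

# Assign a fresh GUID so the clone no longer collides with its source.
zpool reguid smpoolsnap
```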

Regards,
sridhar.


Re: [zfs-discuss] how to quiesce and unquiesc zfs and zpool for array/hardware snapshots ?

2010-11-15 Thread sridhar surampudi
Hi,

How would it help with instant recovery or point-in-time recovery, i.e. restoring data at the device/LUN level?

Currently it is easy, as I can unwind the primary device stack, restore data at the device/LUN level, and recreate the stack.

Thanks & Regards,
sridhar.


Re: [zfs-discuss] how to quiesce and unquiesc zfs and zpool for array/hardware snapshots ?

2010-11-14 Thread sridhar surampudi
Hi Darren,

Thank you for the details. I am aware of zpool export/import, but with zpool export the pool is not available for writes.

Is there a way I can freeze a ZFS file system at the file system level?
As an example, for a JFS file system there is the chfs -a freeze ... option.
So if I am taking a hardware snapshot, I run chfs at the file system (JFS) level and then fire the commands to take the snapshot at the hardware level (or for the array LUNs) to get a consistent backup. In this case, no downtime is required for the file system.

Once the snapshot is done, I unquiesce/thaw the file system.

I am looking for how to do a similar freeze for a ZFS file system.

Thanks & Regards,
sridhar.


Re: [zfs-discuss] how to quiesce and unquiesc zfs and zpool for array/hardware snapshots ?

2010-11-14 Thread sridhar surampudi
Hi Darren,

In short, I am looking for a way to freeze and thaw a ZFS file system so that, for a hardware snapshot, I can:
1. run zfs freeze
2. run the hardware snapshot on the devices belonging to the zpool where the given file system resides
3. run zfs thaw
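There is no public zfs freeze/thaw in the builds discussed here; since ZFS keeps the on-disk state transactionally consistent at all times, a commonly suggested substitute is to quiesce the application and flush buffered writes before the array snapshot. A hedged sketch, where take_array_snapshot is purely a placeholder standing in for the vendor CLI (e.g. dscli):

```shell
# Placeholder for the vendor-specific array snapshot command.
take_array_snapshot() {
    echo "array snapshot of $1"
}

# 1. Quiesce writes at the application layer (ZFS itself stays
#    transactionally consistent on disk even without this).
# 2. Flush buffered writes so the snapshot captures recent data.
sync
# 3. Snapshot every LUN backing the pool.
take_array_snapshot mypool-lun
```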

Thanks & Regards,
sridhar.


[zfs-discuss] how to quiesce and unquiesc zfs and zpool for array/hardware snapshots ?

2010-11-12 Thread sridhar surampudi
Hi,

How can I quiesce/freeze all writes to zfs and a zpool if I want to take hardware-level snapshots, or an array snapshot of all the devices under a pool?
Are there any commands, ioctls, or APIs available?

Thanks & Regards,
sridhar.


[zfs-discuss] zpool split how it works?

2010-11-10 Thread sridhar surampudi
Hi,

I was wondering how zpool split works, or how it is implemented.

If a pool pool1 is a mirror of two devices, dev1 and dev2, then using zpool split I can split off dev2 under a new pool name, say pool-mirror.

How can split change the metadata on dev2 to rename it and associate it with the new name, i.e. pool-mirror?

Could you please let me know more about it?
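For reference, the usage itself is roughly this (device names as above; requires one of the later builds that include zpool split):

```shell
# Create a two-way mirror, then split the second side off.
zpool create pool1 mirror dev1 dev2
zpool split pool1 pool-mirror   # detaches dev2, relabels it as pool-mirror

# The split-off pool is left exported; import it to use it.
zpool import pool-mirror
```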

Thanks & Regards,
sridhar.


Re: [zfs-discuss] zpool split how it works?

2010-11-10 Thread sridhar surampudi
Hi Darren,

Thanks for your info. 

Sorry, the below might be lengthy.

Yes, I am looking for the actual implementation rather than how to use zpool split.

My requirement is not at the ZFS file system level, and it is also not about ZFS snapshots.

As I understand it,
if my zpool, say mypool, is created using "zpool create mypool mirror device1 device2",
then after running "zpool split mypool newpool device2" I can access device2 as newpool.

The same data is available on newpool as on mypool, as long as there are no writes/modifications to newpool.

What I am looking for is:

If my devices (say the zpool is created with only one device, device1) are from an array and I take an array snapshot (zfs/zpool does not come into the picture, as I take a hardware snapshot), I get a snapshot device, say device2.

I am looking for a way to use the snapshot device device2 by recreating the zpool and zfs stack under an alternate name.

zpool split must be making some changes to the metadata of device2 to associate it with the new name, i.e. newpool.

I want to do the same for a snapshot device created using an array/hardware snapshot.

Thanks & Regards,
sridhar.


Re: [zfs-discuss] is opensolaris support ended?

2010-11-10 Thread sridhar surampudi
Thanks for your help.

I will check this out.

Regards,
sridhar.


[zfs-discuss] changing zpool information

2010-11-09 Thread sridhar surampudi
Hi,

If I make a comparison, a zpool is like a volume group or disk group; as an example, on AIX we have AIX LVM.
AIX LVM provides commands like recreatevg, to which you provide the snapshot devices.

In the case of HP LVM or Linux LVM, we can create a new vg/lv structure, add the snapshotted devices to it, and then import the vg.

I would like to know the equivalent for a zpool once an array snapshot of its devices is taken.

Thanks & Regards,
sridhar.


Re: [zfs-discuss] rename zpool

2010-11-09 Thread sridhar surampudi
zfs clone is at the ZFS file system level; what I am looking for here is to rebuild the file system stack from bottom to top. Once I take the (hardware) snapshot, the snapshot devices carry the same copy of data and metadata.

If my snapshot device is dev2, then its metadata will have smpoolsnap. If I need to use dev2 on the same machine, since smpoolsnap is already present on dev1, this throws an error.
What I am looking for is: if I can modify this metadata, I can use dev2 under an alternate name, so that all the file systems would be available under an alternate zpool name.

As I mentioned below, I can do this for HP LVM or AIX LVM (recreatevg command) if I create a snapshot at the array level.

Thanks & Regards,
sridhar


[zfs-discuss] is opensolaris support ended?

2010-11-09 Thread sridhar surampudi
Hi,

I have downloaded and am using the OpenSolaris VirtualBox image, which shows the versions below:

zfs version 3
zpool version 14

cat /etc/release shows
2009.06 snv_111b X86

Is this the final build available?
Can I upgrade it to a higher version of zfs/zpool?
Can I get any updated VDI image with a zfs/zpool version that has zpool split support?
Please help.

Regards,
sridhar.


[zfs-discuss] how to upgrade

2010-10-22 Thread sridhar surampudi
Hi,

zfs upgrade shows the version as 4, and zpool upgrade shows the version as 15.

/etc/release shows Solaris 10 10/09 s10s_u8wos_08a SPARC.

And my zpool doesn't have support for split.

Could you please suggest how to upgrade my Solaris box to the latest zfs and zpool versions, to get the updated support?

Thanks & Regards,
sridhar.


Re: [zfs-discuss] rename zpool

2010-10-20 Thread sridhar surampudi
Hi Cindys,

Thank you for reply. 

zfs/zpool should have the ability to access snapshot devices under a configurable name.

As an example, if the file system stack is created as

vxfs( /mnt1)
  |
  |
vxvm(lv1)
  |
  |
(device from an array / LUN, say dev1),

and I take an array-level or hardware-level snapshot, and the snapshot device for dev1 is, say, dev2, then we can rename dev2's properties to access dev2 under a new name, say lv1_snap, and further access the file system at a new mount point, /mnt1_newmount.

This can also be done if HP LVM is present.

I am looking for a similar kind of ability in the presence of zfs/zpool:

  zfs (smpool/fs1)
   |
   |
  smpool
  |
  |
 (device from an array / LUN, say dev1)

Once I take an array or hardware snapshot of dev1 and get dev2, I am looking for a way to make the snapshot device dev2 accessible as smpoolsnap (or any name) while keeping smpool active, with fs1 further accessible, read-only, as smpoolsnap/fs1.

If zfs/zpool do not have this support, is there a way I can open an issue so that it will be available in future releases?

Thanks & Regards,
sridhar.


[zfs-discuss] rename zpool

2010-10-19 Thread sridhar surampudi
Hi,

I have two questions:
1) Is there any way of renaming a zpool without export/import?

2) If I take a hardware snapshot of the devices under a zpool (where the snapshot device is an exact copy including metadata, i.e. the zpool and associated file systems), is there any way to rename the zpool on the snapshotted devices without losing the data?
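For question 1, the supported rename path does go through export/import; as a hedged sketch (oldpool/newpool are placeholder names):

```shell
zpool export oldpool           # the pool is unavailable during this window
zpool import oldpool newpool   # re-import under the new name
```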

Thanks & Regards,
sridhar.


[zfs-discuss] best way to check zfs/zpool version programatically

2010-10-09 Thread sridhar surampudi
Hi,

What is the right way to check the versions of zfs and zpool?

I am writing a piece of code which calls the zfs command line. Before actually initiating anything and going ahead, I want to check which versions of zfs and zpool are present on the system.

As an example, zpool split is not present prior to build 134 (not sure exactly), so an application using my code would fail if zpool split were called.

Is there any "zfs version" or "zpool version" command present in recent updates? Or is there any other way (reading some file) I can get these details?
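One hedged approach is to parse the version number out of the banner that zpool upgrade prints. The sample line below is hard-coded from output quoted elsewhere in this list; on a live system you would capture it with banner="$(zpool upgrade | head -1)". The threshold of 22 is an assumption for illustration only; in practice zpool split availability tracks the OS build rather than strictly the pool version:

```shell
# Sample banner; on a real host: banner="$(zpool upgrade | head -1)"
banner="This system is currently running ZFS pool version 15."

# Extract the first number from the line.
ver=$(echo "$banner" | sed 's/[^0-9]*\([0-9][0-9]*\).*/\1/')

if [ "$ver" -ge 22 ]; then
    echo "pool version $ver: assuming zpool split is available"
else
    echo "pool version $ver: zpool split not available"
fi
```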


Thanks & Regards,
sridhar.


Re: [zfs-discuss] moving newly created pool to alternate host

2010-10-08 Thread sridhar surampudi
Hi Cindys,

Thank you for step by step explanation and it is quite clear now.

Regards,
sridhar.


Re: [zfs-discuss] moving newly created pool to alternate host

2010-10-07 Thread sridhar surampudi
Hi Cindys,

Thanks for your mail. I have some further queries here based on your answer.
Once zpool split creates the new pool (as per the example below, mypool_snap), can I access mypool_snap just by importing it on the same host, host1?

What is the current access mode of the newly created mypool_snap? Is it read-write or read-only?
If it is read-write, is there a way I can make it read-only, so that the backup application cannot misuse it?

Also, I want to use (or am going to use) mypool_snap read-only on an alternate host, i.e. host2. Could you please let me know all the steps I need to take on host1, and then on host2, once zpool split is done.

My guess is that after zpool split, mypool_snap is not imported on host1; one needs to import it explicitly. Instead of importing on the same host, i.e. host1, can I go to host2, where the split-off devices are visible, and directly run zpool import mypool_snap, with the pool then used read-only for backup?

Could you please provide more details.


[zfs-discuss] calling zfs snapshot on multiple file systems under same pool or dirrerent

2010-10-07 Thread sridhar surampudi
Hi,

I am able to call zfs snapshot on an individual file system/volume using zfs snapshot filesystem|volume,

or I can call zfs snapshot -r filesys...@snapshot-name to take all the snapshots recursively.

Is there a way I can specify more than one file system/volume, from the same pool or from different pools, in a single zfs snapshot call?

Ex: pool/fs1, pool2/fs2
Is there any mechanism where I can call zfs snapshot pool/f...@snap1 pool2/f...@snap1?
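On builds where zfs snapshot accepts only one dataset per call, a hedged workaround is simply to loop (shown here as a dry run that prints each command; drop the echo to actually execute them). Note that, unlike a single recursive snapshot, per-dataset calls are not atomic as a group:

```shell
# Datasets to snapshot, possibly from different pools.
datasets="pool/fs1 pool2/fs2"
snapname="snap1"

for ds in $datasets; do
    # Dry run: print the command instead of executing it.
    echo zfs snapshot "$ds@$snapname"
done
```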

Thanks & Regards,
sridhar.


[zfs-discuss] Any way for snapshot zpool with RAIDZ or for independent devices

2010-10-04 Thread sridhar surampudi
Hi,
With the recent additions, using zpool split I can split a mirrored zpool and create a new pool with a given name.

Is there any direct or indirect mechanism by which I can take a snapshot of the devices under an existing zpool, such that from the newly created devices I can recreate the stack (a new zpool and all file systems) without modifying the data?

That way, with the new stack, I would be able to access consistent (snapshot) data, the same as the data present on the original when the backup was taken.

Thanks & Regards,
sridhar.


[zfs-discuss] create mirror copy of existing zfs stack

2010-09-20 Thread sridhar surampudi
Hi,

I have a mirror pool, tank, with two devices underneath, created this way:

#zpool create tank mirror  c3t500507630E020CEAd1  c3t500507630E020CEAd0

Created file system tank/home:
#zfs create tank/home

Created another file system tank/home/sridhar:
#zfs create tank/home/sridhar

After that I created files and directories under tank/home and tank/home/sridhar.

Now I detached the 2nd device, i.e. c3t500507630E020CEAd0.

Since the above device was part of the mirror pool, my guess is that it holds a copy of the data on the other device up to the detach, along with metadata carrying the same pool name and the file systems created.

The question is: is there any way I can create a new, renamed stack by providing a new pool name for this detached device, and access on c3t500507630E020CEAd0 the same data that was created while it was in the mirrored pool tank?
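For comparison, a hedged sketch of the zpool split path, which (on builds that have it) relabels the detached half as its own pool, whereas a plain zpool detach leaves the device without an importable label (device names as above; tank_snap is a placeholder name):

```shell
# Split the mirror instead of detaching: the named device is removed
# from tank and relabeled as an exported pool called tank_snap,
# keeping all file systems and data.
zpool split tank tank_snap c3t500507630E020CEAd0

# Import the split-off pool, here or on another host that sees the disk.
zpool import tank_snap
```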

Thanks & Regards,
sridhar.


Re: [zfs-discuss] create mirror copy of existing zfs stack

2010-09-20 Thread sridhar surampudi
Thank you for your quick reply.

When I run the command below, it shows:
bash-3.00# zpool upgrade
This system is currently running ZFS pool version 15.

All pools are formatted using this version.

How can I upgrade to newer zpool and zfs versions so that I have zpool split capability?
I am a bit new to Solaris and zfs.
Could you please help with how I can upgrade?
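For reference, the upgrade-related commands are roughly these; note that zpool upgrade can only raise a pool to the highest version the installed OS supports, so getting zpool split requires moving to a newer OS build first (a hedged sketch):

```shell
zpool upgrade -v   # list the pool versions this OS release supports
zpool upgrade -a   # upgrade all pools to the highest supported version
```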

Thanks & Regards,
sridhar.


Re: [zfs-discuss] create mirror copy of existing zfs stack

2010-09-20 Thread sridhar surampudi
I am using the Solaris 10 SPARC version.

Following are the output of the release file and uname -a, respectively:

bash-3.00# cat /etc/release
  Solaris 10 10/09 s10s_u8wos_08a SPARC
   Copyright 2009 Sun Microsystems, Inc.  All Rights Reserved.
Use is subject to license terms.
   Assembled 16 September 2009
bash-3.00# uname -a
SunOS oigtsol12 5.10 Generic_141444-09 sun4u sparc SUNW,Sun-Fire-V440

I think zpool split is available from build 135. I am not sure which build corresponds to the version I am using.

I have also tried zpool upgrade -a but didn't find any difference.

Thanks & Regards,
sridhar.


[zfs-discuss] recreating stack (zpool and zfs) once hardware snapshot is taken.

2010-09-15 Thread sridhar surampudi
Hi,

Following is what I am looking for.

I need to take a hardware snapshot of the devices which sit under a ZFS file system.

I can identify the type of configuration using zpool status and get the list of disks.
For example: a single LUN is assigned to a zpool, and a file system mypool/myfs is created.
Say the LUN is 1002.

So the stack would be the myfs file system under mypool, which is created from LUN 1002.

Once I take a snapshot of LUN 1002 onto 1003, the new device I get is 1003, the snapshot device.

I would like to know how I can recreate a stack similar to the existing one, say zpool mypool_snap and file system myfs_snap.

I found some information that zpool export and import need to be done before taking the snapshot; I am wondering how I can use that for my requirement.


Thanks,
sridhar.