Re: [zfs-discuss] zfs with SAN / cluster problem

2008-04-26 Thread Christophe Rolland
 Note that it is expected that the cluster will force
 import, so in a
I was talking about creation, not import.
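To illustrate the distinction (sandisk being the shared LUN from my first mail):
  node2# zpool import -f x1        <- forced import, what the cluster is expected to do
  node2# zpool create x2 sandisk   <- my complaint: creation is accepted with no check at all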


 You must be running an older version of Solaris.  The
We are running s10u4 + SC 3.2.
Anyway, the bug has now been accepted.
With a cluster and SAN, ZFS does not _yet_ behave normally :)

Thanks for your answer.
 
 


Re: [zfs-discuss] zfs with SAN / cluster problem

2008-04-11 Thread Christophe Rolland
When moving pools we do of course use export/import or the Sun Cluster sczbt agent.
Nevertheless, we don't want to use ZFS as a global file system with concurrent access; we just want to use it the way we use SVM or VxVM, to declare volumes usable by the cluster's nodes (and used by only one node at a time).
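For the manual case, the sequence is basically this (the pool name is just an example):
  node1# zpool export dbpool
  node2# zpool import dbpool       (or zpool import -f dbpool if node1 went down hard)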

So it seems a bit problematic to me that once a LUN is in use by a storage resource declared on one node, I can still put it into a new zpool from the other node without any warning. At least VxVM asks for a -f flag.

Thanks for your answer, Tim.
 
 


[zfs-discuss] zfs with SAN / cluster problem

2008-04-08 Thread Christophe Rolland
Hi

I have a SAN disk visible on two nodes (in the global zone or a local zone).
On the first node, I can create a pool with zpool create x1 sandisk.
If I try to reuse this disk on the same node, I get a 'vdev in use' warning.
If I try to create a pool on the second node using the same disk, zpool create x2 sandisk, it works fine, without any warning, before leading to obvious problems.
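In command form, that is roughly:
  node1# zpool create x1 sandisk   -> OK
  node1# zpool create x2 sandisk   -> refused, vdev in use
  node2# zpool create x2 sandisk   -> succeeds, no warning at all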

I am using Solaris 10 u4.
Did anyone encounter the same problem on OpenSolaris or S10?
What could I be missing?
This happens no matter what the NOINUSE_CHECK variable is set to.
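(If I understood that variable correctly, node2 stays silent both with
  node2# NOINUSE_CHECK=1 zpool create x2 sandisk
and with the variable left unset.)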

Thanks a lot,
Christophe
 
 


Re: [zfs-discuss] ZFS and SAN

2008-02-11 Thread Christophe Rolland
Hi Robert,

thanks for the answer.

 You are not the only one. It's somewhere on ZFS developers list...
Yes, I checked the whole list for this.
So, let's wait for the feature.

 Actually it should complain and using -f (force)
On the active node, yes.
But if we want to reuse the LUNs on the other node, there is no warning at all.

 CR - what could be some interesting tools to test IO
 Check for filebench (included with recent SXCE).
I'll try it.
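From a quick look at the docs, usage seems to be roughly this (workload, path, and duration are just examples):
  # filebench
  filebench> load oltp
  filebench> set $dir=/dbpool/fbtest
  filebench> run 60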

Thanks for your answer,
Christophe
 
 


[zfs-discuss] ZFS and SAN

2008-02-01 Thread Christophe Rolland
Hi all
We are considering using ZFS for various kinds of storage (DB, etc.). Most features are great, especially the ease of use.
Nevertheless, a few questions:

- We are using SAN disks, so most JBOD recommendations don't apply, but I did not find many reports about zpools of a few terabytes on LUNs... anybody?

- We cannot remove a device from a pool, so there is no way to correct a mistaken attachment of a 200 GB LUN to a 6 TB pool on which Oracle runs... Am I the only one worrying?

- On a Sun Cluster, LUNs are seen on both nodes. Can we prevent mistakes like creating a pool on already assigned LUNs? For example, Veritas wants a force flag. With ZFS I can do:
node1: zpool create X lun1 lun2
node2: zpool create Y lun1 lun2
and then the results are unpredictable, but pool X will never switch again ;-) resource and zone are dead.

- What could be some interesting tools to test I/O performance? Did someone run iozone and publish a baseline, the modifications tried, and the corresponding results? I was thinking of something like the run sketched below.
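A possible iozone invocation (target path, size limit, and output file are just examples):
  # iozone -a -g 4g -f /dbpool/iozone.tmp -R -b iozone-baseline.xls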

Well, anyway, thanks to the ZFS team :D
 
 