Re[2]: [zfs-discuss] sharing a storage array
Hello Jeff,

Friday, July 28, 2006, 4:21:42 PM, you wrote:

JV> Now that I've gone and read the zpool man page :-[ it seems that only whole
JV> disks can be exported/imported.

No, that's not the case. If you create a pool from slices, then importing and
exporting that pool involves only those slices. So if you create two slices on
one shared LUN, you can put a pool on each slice and import each pool on a
different server. However, such a configuration could be unfortunate for the
other reasons people have stated.

--
Best regards,
Robert                          mailto:[EMAIL PROTECTED]
                                http://milek.blogspot.com
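For example, something along these lines -- the device and pool names here are
only placeholders, assuming the shared LUN shows up as c2t0d0 on both hosts and
has already been carved into slices s0 and s1 with format(1M):

    # on host A: create a pool on the first slice of the shared LUN
    zpool create poolA c2t0d0s0

    # on host B: create a second pool on the other slice of the same LUN
    zpool create poolB c2t0d0s1

Each pool can then be exported and imported independently of the other, so each
server owns one of them.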
Re: [zfs-discuss] sharing a storage array
Richard Elling wrote:
> Danger Will Robinson...
>
> Jeff Victor wrote:
> > Jeff Bonwick wrote:
> > > > If one host failed I want to be able to do a manual mount on the other host.
> > >
> > > Multiple hosts writing to the same pool won't work, but you could indeed
> > > have two pools, one for each host, in a dual active-passive arrangement.
> > > That is, you dual-attach the storage with host A talking to pool A and
> > > host B talking to pool B. If host B fails, host A can 'zpool import -f B'
> > > to start serving up the B data. HA-ZFS (part of SunCluster 3.2) will
> > > automate this, but for now you can roll your own along these lines.
> >
> > Cool. Does this method require assigning each disk to one pool or the
> > other, or can disks be divided into partitions before pool assignment?
>
> The problem with slicing disks and sharing the slices is that you are more
> prone to fatal operational mistakes. For storage where isolation is
> enforced, SCSI reservations are often used. SCSI reservations work on a
> per-LUN basis, not a per-slice basis, because SCSI has no concept of
> slices (or partitions). A safer approach is to work only at a per-LUN
> level for sharing disks.

Also from a performance point of view...

If the LUN is a local disk, sharing slices can turn what would be sequential
disk I/O into random I/O, not a good thing if the slices are used
concurrently. If the LUN is on a storage array, then per-LUN caching,
read-ahead, write-behind, RAID overhead, and stripe-sized I/Os are all
impacted. Sharing slices on a single LUN can and does cause unforeseen
performance problems, often very hard to diagnose.

The old KISS policy of creating one big LUN, then carving it up into disk
slices or volume-manager-controlled volumes, often causes problems later in
the LUN's life.

Jim

> --
> richard
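To make the per-LUN approach concrete (array LUN names below are purely
illustrative): give each pool its own whole LUN instead of slices of a shared
one, roughly like this:

    # host A owns the whole first LUN
    zpool create poolA c3t0d0

    # host B owns the whole second LUN
    zpool create poolB c3t1d0

As a side benefit, when ZFS is given a whole disk it can safely enable the
disk's write cache, which it won't do for a slice.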
Re: [zfs-discuss] sharing a storage array
Richard Elling wrote:
> Danger Will Robinson...
>
> Jeff Victor wrote:
> > Jeff Bonwick wrote:
> > > Multiple hosts writing to the same pool won't work, but you could indeed
> > > have two pools, one for each host, in a dual active-passive arrangement.
> > > That is, you dual-attach the storage with host A talking to pool A and
> > > host B talking to pool B. If host B fails, host A can 'zpool import -f B'
> > > to start serving up the B data. HA-ZFS (part of SunCluster 3.2) will
> > > automate this, but for now you can roll your own along these lines.
> >
> > Cool. Does this method require assigning each disk to one pool or the
> > other, or can disks be divided into partitions before pool assignment?
>
> The problem with slicing disks and sharing the slices is that you are more
> prone to fatal operational mistakes. For storage where isolation is
> enforced, SCSI reservations are often used. SCSI reservations work on a
> per-LUN basis, not a per-slice basis, because SCSI has no concept of
> slices (or partitions). A safer approach is to work only at a per-LUN
> level for sharing disks.
>
> --
> richard

Now that I've gone and read the zpool man page :-[ it seems that only whole
disks can be exported/imported.

--
Jeff VICTOR              Sun Microsystems            jeff.victor @ sun.com
OS Ambassador            Sr. Technical Specialist
Solaris 10 Zones FAQ:    http://www.opensolaris.org/os/community/zones/faq
--
Re: [zfs-discuss] sharing a storage array
Danger Will Robinson...

Jeff Victor wrote:
> Jeff Bonwick wrote:
> > > If one host failed I want to be able to do a manual mount on the other host.
> >
> > Multiple hosts writing to the same pool won't work, but you could indeed
> > have two pools, one for each host, in a dual active-passive arrangement.
> > That is, you dual-attach the storage with host A talking to pool A and
> > host B talking to pool B. If host B fails, host A can 'zpool import -f B'
> > to start serving up the B data. HA-ZFS (part of SunCluster 3.2) will
> > automate this, but for now you can roll your own along these lines.
>
> Cool. Does this method require assigning each disk to one pool or the
> other, or can disks be divided into partitions before pool assignment?

The problem with slicing disks and sharing the slices is that you are more
prone to fatal operational mistakes. For storage where isolation is enforced,
SCSI reservations are often used. SCSI reservations work on a per-LUN basis,
not a per-slice basis, because SCSI has no concept of slices (or partitions).
A safer approach is to work only at a per-LUN level for sharing disks.

--
richard
Re: [zfs-discuss] sharing a storage array
Jeff Bonwick wrote:
> > If one host failed I want to be able to do a manual mount on the other host.
>
> Multiple hosts writing to the same pool won't work, but you could indeed
> have two pools, one for each host, in a dual active-passive arrangement.
> That is, you dual-attach the storage with host A talking to pool A and
> host B talking to pool B. If host B fails, host A can 'zpool import -f B'
> to start serving up the B data. HA-ZFS (part of SunCluster 3.2) will
> automate this, but for now you can roll your own along these lines.

Cool. Does this method require assigning each disk to one pool or the other,
or can disks be divided into partitions before pool assignment?

--
Jeff VICTOR              Sun Microsystems            jeff.victor @ sun.com
OS Ambassador            Sr. Technical Specialist
Solaris 10 Zones FAQ:    http://www.opensolaris.org/os/community/zones/faq
--
Re: [zfs-discuss] sharing a storage array
On 7/28/06, Jeff Bonwick <[EMAIL PROTECTED]> wrote:
> > I have a SAS array with a zfs pool on it. zfs automatically searches for
> > and mounts the zfs pool I've created there. I want to attach another
> > host to this array, but it doesn't have any provision for zoning or the
> > like. (Like you would find in an FC array or in the switch infrastructure.)
> >
> > Will host-b not automatically mount filesystems on pools created on host-a,
> > and vice versa, or is this going to be a problem? Ideally, I could create
> > a single pool and mount some filesystems on host-a, and some on host-b, but
> > barring that, even just being able to have 2 pools, each of which can be
> > mounted on one of the hosts, would be great.
> >
> > If one host failed I want to be able to do a manual mount on the other host.
>
> Multiple hosts writing to the same pool won't work, but you could indeed
> have two pools, one for each host, in a dual active-passive arrangement.
> That is, you dual-attach the storage with host A talking to pool A and
> host B talking to pool B. If host B fails, host A can 'zpool import -f B'
> to start serving up the B data. HA-ZFS (part of SunCluster 3.2) will
> automate this, but for now you can roll your own along these lines.

Hi,

Okay, just wondering, but can you define "won't work"? Will ZFS spot someone
else writing to the disks and refuse to do any work? Will it spill its guts
all over the dump device? Or will the two hosts just fight each other,
correcting each other's changes?

James Dickens
uadmin.blogspot.com

> Jeff
Re: [zfs-discuss] sharing a storage array
> I have a SAS array with a zfs pool on it. zfs automatically searches for
> and mounts the zfs pool I've created there. I want to attach another
> host to this array, but it doesn't have any provision for zoning or the
> like. (Like you would find in an FC array or in the switch infrastructure.)
>
> Will host-b not automatically mount filesystems on pools created on host-a,
> and vice versa, or is this going to be a problem? Ideally, I could create
> a single pool and mount some filesystems on host-a, and some on host-b, but
> barring that, even just being able to have 2 pools, each of which can be
> mounted on one of the hosts, would be great.
>
> If one host failed I want to be able to do a manual mount on the other host.

Multiple hosts writing to the same pool won't work, but you could indeed have
two pools, one for each host, in a dual active-passive arrangement. That is,
you dual-attach the storage with host A talking to pool A and host B talking
to pool B. If host B fails, host A can 'zpool import -f B' to start serving up
the B data. HA-ZFS (part of SunCluster 3.2) will automate this, but for now
you can roll your own along these lines.

Jeff
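A rough sketch of the manual takeover, assuming the pools really are just
named A and B (substitute your own pool names):

    # on host A, after host B dies, take over pool B:
    zpool import -f B     # -f overrides the "pool may be in use" safety check

    # B's filesystems mount automatically on import; when host B comes back,
    # hand the pool back cleanly:
    zpool export B        # on host A
    zpool import B        # on host B (no -f needed after a clean export)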
Re: [zfs-discuss] sharing a storage array
> > bonus questions: any idea when hot spares will make it to S10?
>
> good question :)

It'll be in U3, and probably available as patches for U2 as well. The reason
for U2 patches is Thumper (x4500), because we want ZFS on Thumper to have hot
spares and double-parity RAID-Z from day one.

Jeff
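For the curious, once those features are available, a double-parity pool with
a hot spare looks something like this (pool and device names are purely
illustrative):

    # four-disk RAID-Z2 pool with one hot spare
    zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 spare c1t4d0

    # or add a spare to an existing pool
    zpool add tank spare c1t5d0

    # spares show up in their own section of the status output
    zpool status tank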
Re: [zfs-discuss] sharing a storage array
On Thu, Jul 27, 2006 at 10:41:10PM -0700, Frank Cusack wrote:
> On July 28, 2006 11:59:50 AM +1000 grant beattie <[EMAIL PROTECTED]> wrote:
> >
> > ZFS won't automatically import a pool unless it is explicitly exported
> > first via "zfs export", so it should be safe to do this, but it has to
> > be done at the pool level, not the filesystem level.
>
> Just to clarify, that would be 'zpool export', yeah?

gah, yes... sorry :)

grant.
Re: [zfs-discuss] sharing a storage array
On July 28, 2006 11:59:50 AM +1000 grant beattie <[EMAIL PROTECTED]> wrote:
>
> ZFS won't automatically import a pool unless it is explicitly exported
> first via "zfs export", so it should be safe to do this, but it has to
> be done at the pool level, not the filesystem level.

Just to clarify, that would be 'zpool export', yeah?

-frank
Re: [zfs-discuss] sharing a storage array
On Thu, Jul 27, 2006 at 06:35:06PM -0700, Frank Cusack wrote:
> Hi
>
> I have a SAS array with a zfs pool on it. zfs automatically searches for
> and mounts the zfs pool I've created there. I want to attach another
> host to this array, but it doesn't have any provision for zoning or the
> like. (Like you would find in an FC array or in the switch infrastructure.)
>
> Will host-b not automatically mount filesystems on pools created on host-a,
> and vice versa, or is this going to be a problem? Ideally, I could create
> a single pool and mount some filesystems on host-a, and some on host-b, but
> barring that, even just being able to have 2 pools, each of which can be
> mounted on one of the hosts, would be great.
>
> If one host failed I want to be able to do a manual mount on the other host.

ZFS won't automatically import a pool unless it is explicitly exported first
via "zfs export", so it should be safe to do this, but it has to be done at
the pool level, not the filesystem level.

> bonus questions: any idea when hot spares will make it to S10?

good question :)

grant.
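In other words, moving a pool between the hosts by hand looks roughly like the
following (the pool name "tank" is just a placeholder, and the export command
is actually 'zpool export', as noted elsewhere in the thread):

    # on host-a: release the pool
    zpool export tank

    # on host-b: list pools that are visible but not yet imported
    zpool import

    # on host-b: import one by name (pool level, not per-filesystem)
    zpool import tank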
[zfs-discuss] sharing a storage array
Hi

I have a SAS array with a zfs pool on it. zfs automatically searches for and
mounts the zfs pool I've created there. I want to attach another host to this
array, but it doesn't have any provision for zoning or the like. (Like you
would find in an FC array or in the switch infrastructure.)

Will host-b not automatically mount filesystems on pools created on host-a,
and vice versa, or is this going to be a problem? Ideally, I could create a
single pool and mount some filesystems on host-a, and some on host-b, but
barring that, even just being able to have 2 pools, each of which can be
mounted on one of the hosts, would be great.

If one host failed I want to be able to do a manual mount on the other host.

bonus questions: any idea when hot spares will make it to S10?

thanks
-frank