Re: [zfs-discuss] iscsi confusion
On Sat, Sep 29, 2012 at 3:09 AM, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:

> I am confused, because I would have expected a 1-to-1 mapping: if you
> create an iscsi target on some system, you would have to specify which
> LUN it connects to. But that is not the case...

Nope. One target can have anything from zero (which is kinda useless) to
many LUNs.

> I shouldn't be thinking in such linear terms. When I create an iscsi
> target, don't think of it as connecting to a device - instead, think of
> it as sort of a channel. Any initiator connecting to it can see any of
> the devices that I have done add-views on.

Yup.

> But each iscsi target can only be used by one initiator at a time.

Nope. Many people use iscsi to provide shared storage (e.g. for
clustering), where two or more initiators connect to the same target.

-- 
Fajar
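To make the view mechanism concrete, here is a minimal COMSTAR sketch of
one target fronting two LUNs, with a host group restricting which
initiators see which LUN (the zvol paths, group name, and initiator IQN
below are made-up examples):

    # Back two LUNs with zvols and register them with STMF;
    # each create-lu prints the GUID for that logical unit.
    sbdadm create-lu /dev/zvol/rdsk/tank/vol1
    sbdadm create-lu /dev/zvol/rdsk/tank/vol2

    # One target; note it is not tied to any particular LUN.
    itadm create-target

    # A host group restricts a view to specific initiators.
    stmfadm create-hg hg-esx1
    stmfadm add-hg-member -g hg-esx1 iqn.1998-01.com.vmware:esx1

    # vol1 is visible only to members of hg-esx1, as LUN 0;
    # vol2 gets a default view, visible to every initiator.
    stmfadm add-view -h hg-esx1 -n 0 <GUID-of-vol1>
    stmfadm add-view <GUID-of-vol2>

A target with no views attached presents no LUNs at all to an initiator
that logs in - the "zero LUNs" case above.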
[zfs-discuss] iscsi confusion
I am confused, because I would have expected a 1-to-1 mapping: if you
create an iscsi target on some system, you would have to specify which
LUN it connects to. But that is not the case...

I read the man pages for sbdadm, stmfadm, itadm, and iscsiadm. I read
some online examples, where you first "sbdadm create-lu", which gives
you a GUID for a specific device in the system, then "stmfadm add-view
$GUID", and then "itadm create-target". It's this last command that
confuses me, because it generates an iscsi target "iqn.blahblah"... and
it will create as many as you specify, regardless of how many LUNs you
have available. So how can I know which device I'm handing out to some
initiator? And if an initiator connects to all those different
iqn.blahblah addresses, what device will they actually be mucking
around with?

I'm not quite sure what in my brain is thinking wrong, but I'm guessing
the explanation is something like this (can anyone tell me if this is
the correct interpretation?): I shouldn't be thinking in such linear
terms. When I create an iscsi target, don't think of it as connecting
to a device - instead, think of it as sort of a channel. Any initiator
connecting to it can see any of the devices that I have done add-views
on. But each iscsi target can only be used by one initiator at a time.

Is that a good understanding?

Thanks...
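For what it's worth, the mapping being asked about here is queryable. A
sketch of the same workflow with inspection commands added (the device
path is a made-up example):

    sbdadm create-lu /dev/zvol/rdsk/tank/myvol   # prints the LU's GUID
    stmfadm add-view <GUID>
    itadm create-target                          # prints the iqn name

    # Which backing device does each GUID correspond to?
    stmfadm list-lu -v

    # Which host groups / target groups / LUN numbers expose this LU?
    stmfadm list-view -l <GUID>

So the answer to "which device am I handing out?" is: whichever LUs have
views matching the connecting initiator, regardless of which target iqn
it logged in through.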
Re: [zfs-discuss] iSCSI confusion
VMware will properly handle sharing a single iSCSI volume across
multiple ESX hosts. We have six ESX hosts sharing the same iSCSI
volumes - no problems.

-Scott
Re: [zfs-discuss] iSCSI confusion
On May 23, 2010, at 6:05 PM, Chris Dunbar - Earthside, LLC wrote:

> Hello,
>
> I think I know the answer to this, but not being an iSCSI expert I am
> hoping to be pleasantly surprised by your answers. I currently use ZFS
> plus NFS to host a shared VMFS store for my VMware ESX cluster. It's
> easy to set up and high availability works great since all the ESX
> hosts see the same storage pool. However, NFS performance has been
> pretty poor and I am looking for other options. I do not currently use
> any SSD drives in my pool and I understand adding a couple as ZIL
> devices might improve performance. I am also thinking about switching
> to iSCSI. Here is my confusion/question. Is it possible to share the
> same ZFS file system with multiple ESX hosts via iSCSI?

Yes.

> My belief is that an iSCSI connection is sort of like having a
> dedicated physical drive and therefore does not lend itself to sharing
> between multiple systems.

No. That said, if a single iSCSI target is concurrently shared by two
initiators, then the access needs to be controlled in some way, via a
shared storage mechanism or reservations.

 -- richard

--
Richard Elling
rich...@nexenta.com
+1-760-896-4422
ZFS and NexentaStor training, Rotterdam, July 13-15, 2010
http://nexenta-rotterdam.eventbrite.com/
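For context, this is why a second initiator can attach to the same
target at all - on the Solaris initiator side it is just (the portal
address is a made-up example):

    iscsiadm add discovery-address 192.168.1.10:3260
    iscsiadm modify discovery --sendtargets enable
    devfsadm -i iscsi    # build device nodes for the discovered LUNs

Nothing at the iSCSI layer arbitrates writes between the two hosts; the
coordination Richard mentions (a cluster filesystem such as VMFS, or
SCSI reservations) has to come from the layer above.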
Re: [zfs-discuss] iSCSI confusion
Yes, it requires a clustered filesystem to share out a single LUN to
multiple hosts. VMFS3, however bad of an implementation, is in fact a
clustered filesystem. I highly doubt NFS is your problem, though. I'd
take NFS over iSCSI and VMFS any day.

On May 23, 2010 8:06 PM, "Chris Dunbar - Earthside, LLC"
<cdun...@earthside.net> wrote:

> Hello,
>
> I think I know the answer to this, but not being an iSCSI expert I am
> hoping to be pleasantly surprised by your answers. I currently use ZFS
> plus NFS to host a shared VMFS store for my VMware ESX cluster. It's
> easy to set up and high availability works great since all the ESX
> hosts see the same storage pool. However, NFS performance has been
> pretty poor and I am looking for other options. I do not currently use
> any SSD drives in my pool and I understand adding a couple as ZIL
> devices might improve performance. I am also thinking about switching
> to iSCSI. Here is my confusion/question. Is it possible to share the
> same ZFS file system with multiple ESX hosts via iSCSI? My belief is
> that an iSCSI connection is sort of like having a dedicated physical
> drive and therefore does not lend itself to sharing between multiple
> systems. Please set me straight.
>
> Thank you,
> Chris Dunbar
[zfs-discuss] iSCSI confusion
Hello,

I think I know the answer to this, but not being an iSCSI expert I am
hoping to be pleasantly surprised by your answers. I currently use ZFS
plus NFS to host a shared VMFS store for my VMware ESX cluster. It's
easy to set up and high availability works great since all the ESX
hosts see the same storage pool. However, NFS performance has been
pretty poor and I am looking for other options. I do not currently use
any SSD drives in my pool and I understand adding a couple as ZIL
devices might improve performance. I am also thinking about switching
to iSCSI.

Here is my confusion/question: is it possible to share the same ZFS
file system with multiple ESX hosts via iSCSI? My belief is that an
iSCSI connection is sort of like having a dedicated physical drive and
therefore does not lend itself to sharing between multiple systems.
Please set me straight.

Thank you,
Chris Dunbar