Sorry if these questions sound stupid, but I was not able to find an answer by googling.

1. Does the iSCSI protocol support having multiple target servers serve the same disk/block device?

In the case of Ceph, that would be the same rbd disk image. I was hoping to have multiple servers mount the same rbd image and serve it as an iSCSI LUN. This LUN would be used as VM image storage on VMware / XenServer.

2. Does iSCSI multipathing provide failover/HA capability only on the initiator side? The docs that I came across all mention multipathing on the client side, such as using two different NICs. I did not find anything about an initiator with multiple NICs connecting to multiple iSCSI target servers.

I was hoping to have a resilient solution on the storage side so that I can perform upgrades and maintenance without needing to shut down the VMs running on VMware/XenServer. Is this possible with iSCSI?
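To illustrate what I have in mind on the initiator side, something like this (made-up portal addresses and IQN, assuming open-iscsi and dm-multipath, and assuming both gateways present the LUN with the same SCSI identifiers so multipath can group the paths):

# discover and log in to two separate iSCSI gateways exporting the same LUN
iscsiadm -m discovery -t sendtargets -p 192.168.1.11:3260
iscsiadm -m discovery -t sendtargets -p 192.168.1.12:3260
iscsiadm -m node -T iqn.2014-05.com.example:vmstore --login

# dm-multipath should then show one device with a path through each gateway
multipath -ll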

Cheers 

Andrei 
----- Original Message -----

From: "Leen Besselink" <l...@consolejunkie.net> 
To: ceph-users@lists.ceph.com 
Sent: Saturday, 10 May, 2014 8:31:02 AM 
Subject: Re: [ceph-users] NFS over CEPH - best practice 

On Fri, May 09, 2014 at 12:37:57PM +0100, Andrei Mikhailovsky wrote: 
> Ideally I would like to have a setup with 2+ iscsi servers, so that I can 
> perform maintenance if necessary without shutting down the vms running on the 
> servers. I guess multipathing is what I need. 
> 
> Also I will need to have more than one xenserver/vmware host server, so the 
> iscsi LUNs will be mounted on several servers. 
> 

So you would have multiple machines talking to the same LUN at the same time? 

You'll have to coordinate how changes are written to the backing store; normally you'd have the virtualization servers use some kind of coordination protocol. 

With SCSI there are the older Reserve/Release commands and the newer SCSI-3 Persistent Reservations commands. 
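
To give an idea of what Persistent Reservations look like in practice, here is a rough sketch using sg_persist from sg3_utils (made-up device and key; normally the cluster software issues these commands, not an administrator by hand):

# register a key for this initiator on the shared LUN
sg_persist --out --register --param-sark=0xabc123 /dev/sdb

# take a "Write Exclusive - Registrants Only" reservation (type 5)
sg_persist --out --reserve --param-rk=0xabc123 --prout-type=5 /dev/sdb

# show the current registrations and reservation
sg_persist --in --read-keys /dev/sdb
sg_persist --in --read-reservation /dev/sdb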

(i)SCSI allows multiple changes to be in flight; without coordination things will go wrong. 

Below it was mentioned that you can disable the cache for rbd; if you have no coordination protocol you'll need to do the same on the iSCSI side. 

I believe it will be slower when you do that, but it might work. 
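
As a sketch of what that could look like with tgt (made-up IQN and image name, assuming a tgt build with rbd support; the exact targets.conf options may differ between versions):

# /etc/tgt/targets.conf
<target iqn.2014-05.com.example:vmstore>
    driver iscsi
    bs-type rbd
    backing-store rbd/vmstore
    write-cache off
</target>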

> Would the suggested setup not work for my requirements? 
> 

It depends on whether VMware allows such a setup. 

Then there is another thing: how do the VMware machines coordinate which VMs they should be running? 

I don't know VMware, but usually if you have some kind of clustering setup you'll need to have a 'quorum'. 

A lot of times the quorum is handled by a quorum disk using the SCSI coordination protocols mentioned above. 

Another way to have a quorum is a majority voting system with an uneven number of machines talking over the network. This is what Ceph monitor nodes do. 

An example of a clustering system that can be used without a quorum disk, with only 2 machines talking over the network, is Linux Pacemaker. When something bad happens, one machine will just turn off the power of the other machine to prevent things from going wrong (this is called STONITH). 
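
A rough sketch of what that can look like with Pacemaker and IPMI-based fencing (made-up node names, addresses and credentials; the parameter names vary between pcs and fence agent versions):

pcs property set stonith-enabled=true
pcs property set no-quorum-policy=ignore
pcs stonith create fence-node1 fence_ipmilan pcmk_host_list="node1" ipaddr="10.0.0.1" login="admin" passwd="secret"
pcs stonith create fence-node2 fence_ipmilan pcmk_host_list="node2" ipaddr="10.0.0.2" login="admin" passwd="secret"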

> Andrei 
> ----- Original Message ----- 
> 
> From: "Leen Besselink" <l...@consolejunkie.net> 
> To: ceph-users@lists.ceph.com 
> Sent: Thursday, 8 May, 2014 9:35:21 PM 
> Subject: Re: [ceph-users] NFS over CEPH - best practice 
> 
> On Thu, May 08, 2014 at 01:24:17AM +0200, Gilles Mocellin wrote: 
> > On 07/05/2014 15:23, Vlad Gorbunov wrote: 
> > >It's easy to install tgtd with Ceph support. Ubuntu 12.04, for example: 
> > > 
> > >Connect ceph-extras repo: 
> > >echo deb http://ceph.com/packages/ceph-extras/debian $(lsb_release 
> > >-sc) main | sudo tee /etc/apt/sources.list.d/ceph-extras.list 
> > > 
> > >Install tgtd with rbd support: 
> > >apt-get update 
> > >apt-get install tgt 
> > > 
> > >It's important to disable the rbd cache on the tgtd host. Set in 
> > >/etc/ceph/ceph.conf: 
> > >[client] 
> > >rbd_cache = false 
> > [...] 
> > 
> > Hello, 
> > 
> 
> Hi, 
> 
> > Without cache on the tgtd side, it should be possible to have 
> > failover and load balancing (active/active) multipathing. 
> > Have you tested multipath load balancing in this scenario ? 
> > 
> > If it's reliable, it opens a new way for me to do HA storage with iSCSI! 
> > 
> 
> I have a question: what is your use case? 
> 
> Do you need SCSI-3 persistent reservations so multiple machines can use the 
> same LUN at the same time ? 
> 
> Because in that case I think tgtd won't help you. 
> 
> Have a good day, 
> Leen. 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
