On 02/15/2016 03:29 AM, Dominik Zalewski wrote:
> "Status:
> This code is now being ported to the upstream linux kernel reservation
> API added in this commit:
> 
> https://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/block/ioctl.c?id=bbd3e064362e5057cc4799ba2e4d68c7593e490b
> 
> When this is completed, LIO will call into the iblock backend which will
> then call rbd's pr_ops."
> 
> 
> Does anyone know how up to date this page is?
> http://tracker.ceph.com/projects/ceph/wiki/Clustered_SCSI_target_using_RBD


I just updated it about two weeks ago, so it is current.

> 
> 
> Is SUSE currently the only vendor supporting active/active multipath for
> RBD over iSCSI?  https://www.susecon.com/doc/2015/sessions/TUT16512.pdf
> 

Yes.

> 
> I'm trying to configure an active/passive iSCSI gateway on OSD nodes
> serving an RBD image. Clustering is done using pacemaker/corosync. Does
> anyone have a similar working setup? Anything I should be aware of?

If you have an application that uses SCSI persistent reservations, then
the iSCSI target scripts for upstream pacemaker will not work correctly,
because they do not copy over the reservation state.
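
(You can see what would be lost with sg_persist from sg3_utils. From an
initiator, something like:

    sg_persist --in --read-keys /dev/sdX
    sg_persist --in --read-reservation /dev/sdX

shows the registered keys and the active reservation. After a failover
with the stock scripts, the new target node starts with no PR state, so
all of that is simply gone.)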

If your app does clustering using its own fencing, like Oracle RAC, then
it might be OK, but it is not formally supported or tested by Red Hat.
You have to be careful to use the correct settings. For example, you
cannot have write-caching turned on.
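
With LIO you can check and clear that per backstore via targetcli.
Rough sketch, with a made-up backstore name ("myrbd"); the backstore
type is "iblock" or "block" depending on your targetcli version:

    targetcli /backstores/iblock/myrbd get attribute emulate_write_cache
    targetcli /backstores/iblock/myrbd set attribute emulate_write_cache=0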

There are other issues you can find on various lists. Some people on
this list have gotten it working OK for their specific application, or
at least have worked around the issues they were hitting.
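
For a plain active/passive export with no persistent reservations in
play, the usual shape is one pacemaker group that maps the image,
brings up the target, then exports the LUN, so everything fails over
together. Untested sketch; the ocf:ceph:rbd agent and the parameter
names here are from memory, so check each resource agent's metadata
before using it:

    pcs resource create rbd-img ocf:ceph:rbd pool=rbd name=iscsi-img
    pcs resource create iscsi-tgt ocf:heartbeat:iSCSITarget \
        implementation=lio-t iqn=iqn.2016-02.com.example:rbd-gw
    pcs resource create iscsi-lun ocf:heartbeat:iSCSILogicalUnit \
        implementation=lio-t target_iqn=iqn.2016-02.com.example:rbd-gw \
        lun=0 path=/dev/rbd/rbd/iscsi-img
    pcs resource group add iscsi-gw rbd-img iscsi-tgt iscsi-lun

You would normally add a portal VIP to the same group as well.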

> 
> 
> Thanks
> 
> 
> Dominik
> 
> 
> 
> On 21 January 2016 at 12:08, Dominik Zalewski <dzalew...@optlink.co.uk> wrote:
> 
>     Thanks Mike.
> 
>     Would you not recommend using iSCSI and Ceph under Red Hat based
>     distros until the new code is in place?
> 
>     Dominik
> 
>     On 21 January 2016 at 03:11, Mike Christie <mchri...@redhat.com> wrote:
> 
>         On 01/20/2016 06:07 AM, Nick Fisk wrote:
>         > Thanks for your input, Mike. A couple of questions if I may:
>         >
>         > 1. Are you saying that this rbd backing store is not in mainline
>         > and is only in SUSE kernels? I.e. can I use this lrbd on
>         > Debian/Ubuntu/CentOS?
> 
>         The target_core_rbd backing store is not upstream and is only in
>         SUSE kernels.
> 
>         lrbd is the management tool that basically distributes the
>         configuration info to the nodes you want to run LIO on. In that
>         README you can see it uses the target_core_rbd module by default,
>         but last I looked there was also code to support iblock. So you
>         should be able to use this with other distros that do not have
>         target_core_rbd.
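>
>         From memory, the lrbd config is just JSON that lrbd distributes
>         to each gateway node. Very roughly this shape, but check the
>         lrbd README for the real schema:
>
>             {
>               "pools": [ { "pool": "rbd", "gateways": [
>                 { "target": "iqn.2003-01.org.example:igw",
>                   "tpg": [ { "image": "testvol" } ] } ] } ],
>               "targets": [ { "target": "iqn.2003-01.org.example:igw",
>                 "hosts": [ { "host": "igw1", "portal": "portal1" } ] } ],
>               "portals": [ { "name": "portal1",
>                 "addresses": [ "192.168.1.10" ] } ]
>             }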
> 
>         When I was done porting my code to an iblock-based approach, I
>         was going to test out the lrbd iblock support and fix it up if
>         it needed anything.
> 
>         > 2. Does this have any positive effect on the abort/reset death
>         > loop a number of us were seeing when using LIO+krbd and ESXi?
> 
>         The old code and my new approach do not really help. However, on
>         Monday Ilya and I were talking about this problem, and he gave me
>         some hints on how to add code to cancel/clean up commands, so we
>         will be able to handle aborts/resets properly and will not fall
>         into that problem.
> 
> 
>         > 3. Can you still use something like bcache over the krbd?
> 
>         Not initially. I had been doing active/active across nodes by
>         default, and with that you cannot use bcache on top of krbd as
>         is, since each node would have its own local cache.
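>
>         If you keep it strictly active/passive, so the image is only
>         ever open on one node at a time, you can stack it yourself.
>         Untested sketch, with made-up names:
>
>             rbd map rbd/iscsi-img                      # appears as e.g. /dev/rbd0
>             make-bcache -C /dev/nvme0n1 -B /dev/rbd0   # cache dev + backing dev
>             # then export /dev/bcache0 instead of /dev/rbd0
>
>         Note that nothing invalidates the cache on failover, so anything
>         other than writethrough mode is asking for corruption.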
> 
> 
> 
> 
>         >
>         >
>         >
>         >> -----Original Message-----
>         >> From: Mike Christie [mailto:mchri...@redhat.com]
>         >> Sent: 19 January 2016 21:34
>         >> To: Василий Ангапов <anga...@gmail.com>; Ilya Dryomov <idryo...@gmail.com>
>         >> Cc: Nick Fisk <n...@fisk.me.uk>; Tyler Bishop <tyler.bis...@beyondhosting.net>;
>         >> Dominik Zalewski <dzalew...@optlink.co.uk>; ceph-users <ceph-users@lists.ceph.com>
>         >> Subject: Re: [ceph-users] CentOS 7 iscsi gateway using lrbd
>         >>
>         >> Everyone is right - sort of :)
>         >>
>         >> It is that target_core_rbd module that I made that was rejected
>         >> upstream, along with modifications from SUSE which added
>         >> persistent reservations support. I also made some modifications
>         >> to rbd so target_core_rbd and krbd could share code;
>         >> target_core_rbd uses rbd like a lib. There are also
>         >> modifications to the targetcli-related tool and libs, so you
>         >> can use them to control the new rbd backend. SUSE's lrbd then
>         >> handles setup/management across multiple targets/gateways.
>         >>
>         >> I was going to modify targetcli more and have the user just
>         >> pass in the rbd info there, but did not get that finished. That
>         >> is why in the SUSE stuff you still make the krbd device like
>         >> normal. You then pass that to the target_core_rbd module with
>         >> targetcli, and that is how the module knows about the rbd
>         >> device.
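>         >>
>         >> (I.e. you rbd map the image as usual and then create the rbd
>         >> backstore on top of it. From memory it is something like
>         >>
>         >>     targetcli /backstores/rbd create name=rbd0 dev=/dev/rbd0
>         >>
>         >> but double check that against the SUSE packages.)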
>         >>
>         >> The target_core_rbd module was rejected upstream, so I stopped
>         >> development and am working on the approach suggested by the
>         >> reviewers, which instead of going lio->target_core_rbd->krbd
>         >> goes lio->target_core_iblock->linux block layer->krbd. With
>         >> this approach you just use the normal old iblock driver and
>         >> krbd, and I am modifying them to just work and do the right
>         >> thing.
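>         >>
>         >> The end result is just the stock flow, e.g. (made-up names;
>         >> newer targetcli-fb calls the backstore "block" rather than
>         >> "iblock"):
>         >>
>         >>     rbd map rbd/iscsi-img            # appears as e.g. /dev/rbd0
>         >>     targetcli /backstores/iblock create name=rbd0 dev=/dev/rbd0
>         >>     targetcli /iscsi create iqn.2016-02.com.example:rbd-gw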
>         >>
>         >>
>         >> On 01/19/2016 05:45 AM, Василий Ангапов wrote:
>         >>> So is this a different approach from the one used here by
>         >>> Mike Christie:
>         >>> http://www.spinics.net/lists/target-devel/msg10330.html ?
>         >>> It seems confusing because it also implements a
>         >>> target_core_rbd module. Or not?
>         >>>
>         >>> 2016-01-19 18:01 GMT+08:00 Ilya Dryomov <idryo...@gmail.com>:
>         >>>> On Tue, Jan 19, 2016 at 10:34 AM, Nick Fisk <n...@fisk.me.uk> wrote:
>         >>>>> But interestingly enough, if you look down to where they
>         >>>>> run the targetcli ls, it shows an RBD backing store.
>         >>>>>
>         >>>>> Maybe it's using the krbd driver to actually do the Ceph
>         >>>>> side of the communication, but LIO plugs into this rather
>         >>>>> than just talking to a dumb block device???
>         >>>>
>         >>>> It does use the krbd driver.
>         >>>>
>         >>>> Thanks,
>         >>>>
>         >>>>                 Ilya
>         >
>         >
> 
> 
> 

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
