Hi Jason,

I understand. Thank you for your explanation.

Best regards,

On Mar 9, 2018 3:45 AM, "Jason Dillaman" <jdill...@redhat.com> wrote:

> On Thu, Mar 8, 2018 at 3:41 PM, Lazuardi Nasution
> <mrxlazuar...@gmail.com> wrote:
> > Hi Jason,
> >
> > If there is a case where the gateway cannot access the Ceph cluster, I
> > think you are right. Anyway, I put the iSCSI gateway on a MON node.
>
> It's connectivity to the specific OSD associated with the IO operation
> that is the issue. If you understand the risks and are comfortable
> with them, active/active is a perfectly acceptable solution. I just
> wanted to ensure you understood the risk since you stated corruption
> "seems impossible".
>
> > Best regards,
> >
> >
> > On Mar 9, 2018 1:41 AM, "Jason Dillaman" <jdill...@redhat.com> wrote:
> >
> > On Thu, Mar 8, 2018 at 12:47 PM, Lazuardi Nasution
> > <mrxlazuar...@gmail.com> wrote:
> >> Jason,
> >>
> >> As long as you don't activate any cache and each image is used by a
> >> single client only, it seems impossible to have old data overwritten.
> >> Maybe it is related to the I/O pattern too. Anyway, other Ceph users
> >> may have different experiences; results can differ from case to case.
> >
> > Write operation (A) is sent to gateway X, which cannot access the Ceph
> > cluster, so the IO is queued. The initiator's multipath layer times out
> > and resends write operation (A) to gateway Y, followed by write
> > operation (A') to gateway Y. Shortly thereafter, gateway X is able to
> > send its delayed write operation (A) to the Ceph cluster and
> > overwrites write operation (A') -- thus your data went back in time.
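> >
> > Put concretely, the ordering ends up roughly like this (an illustrative
> > timeline of the same (A)/(A') operations described above):
> >
> >   t0: initiator -> gateway X: write (A); X cannot reach the OSD, so the IO is queued
> >   t1: multipath timeout on the initiator; write (A) is resent via gateway Y and completes
> >   t2: initiator -> gateway Y: write (A') to the same blocks; it completes
> >   t3: gateway X recovers and flushes its queued write (A), overwriting (A')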
> >
> >> Best regards,
> >>
> >>
> >> On Mar 9, 2018 12:35 AM, "Jason Dillaman" <jdill...@redhat.com> wrote:
> >>
> >> On Thu, Mar 8, 2018 at 11:59 AM, Lazuardi Nasution
> >> <mrxlazuar...@gmail.com> wrote:
> >>> Hi Mike,
> >>>
> >>> Since I have moved from LIO to TGT, I can do full ALUA (active/active)
> >>> across multiple gateways. Of course, I have to disable any write-back
> >>> cache at any level (RBD cache and TGT cache). It seems safe to disable
> >>> the exclusive lock since each RBD image is accessed by only a single
> >>> client and, as far as I know, ALUA mostly uses round-robin across the
> >>> I/O paths.
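> >>>
> >>> For reference, that setup boils down to roughly the following
> >>> (illustrative only; pool/image names are placeholders, and features
> >>> that depend on exclusive-lock, such as object-map, would have to be
> >>> disabled first):
> >>>
> >>>     # ceph.conf on the gateway nodes: turn off the librbd writeback cache
> >>>     [client]
> >>>     rbd cache = false
> >>>
> >>>     # per image: drop the exclusive-lock feature
> >>>     rbd feature disable <pool>/<image> exclusive-lock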
> >>
> >> How do you figure that's safe for preventing an overwrite with old
> >> data in an active/active path hiccup?
> >>
> >>> Best regards,
> >>>
> >>> On Mar 8, 2018 11:54 PM, "Mike Christie" <mchri...@redhat.com> wrote:
> >>>>
> >>>> On 03/07/2018 09:24 AM, shadow_lin wrote:
> >>>> > Hi Christie,
> >>>> > Is it safe to use active/passive multipath with krbd with exclusive
> >>>> > lock
> >>>> > for lio/tgt/scst/tcmu?
> >>>>
> >>>> No. We tried to use lio and krbd initially, but there is an issue where
> >>>> IO might get stuck in the target/block layer and get executed after new
> >>>> IO. So for lio, tgt and tcmu it is not safe as is right now. We could
> >>>> add some code to tcmu's file_example handler which can be used with
> >>>> krbd so it works like the rbd one.
> >>>>
> >>>> I do not know enough about SCST right now.
> >>>>
> >>>>
> >>>> > Is it safe to use active/active multipath if using the SUSE kernel
> >>>> > with target_core_rbd?
> >>>> > Thanks.
> >>>> >
> >>>> > 2018-03-07
> >>>> >
> >>>> >
> >>>> > ------------------------------------------------------------------------
> >>>> > shadowlin
> >>>> >
> >>>> >
> >>>> >
> >>>> > ------------------------------------------------------------------------
> >>>> >
> >>>> >     *From:* Mike Christie <mchri...@redhat.com>
> >>>> >     *Sent:* 2018-03-07 03:51
> >>>> >     *Subject:* Re: [ceph-users] iSCSI Multipath (Load Balancing) vs RBD
> >>>> >     Exclusive Lock
> >>>> >     *To:* "Lazuardi Nasution" <mrxlazuar...@gmail.com>, "Ceph
> >>>> >     Users" <ceph-users@lists.ceph.com>
> >>>> >     *Cc:*
> >>>> >
> >>>> >     On 03/06/2018 01:17 PM, Lazuardi Nasution wrote:
> >>>> >     > Hi,
> >>>> >     >
> >>>> >     > I want to do load-balanced multipathing (multiple iSCSI
> >>>> >     > gateway/exporter nodes) of iSCSI LUNs backed by RBD images.
> >>>> >     > Should I disable the exclusive-lock feature? What if I don't
> >>>> >     > disable that feature? I'm using TGT (the manual way) since I got
> >>>> >     > so many CPU-stuck error messages when I was using LIO.
> >>>> >     >
> >>>> >
> >>>> >     You are using LIO/TGT with krbd right?
> >>>> >
> >>>> >     You cannot or shouldn't do active/active multipathing. If you
> >>>> >     have the lock enabled then it bounces between paths for each IO
> >>>> >     and will be slow. If you do not have it enabled then you can end
> >>>> >     up with stale IO overwriting current data.
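> >>>> >
> >>>> >     For the active/passive case, the initiator is typically pinned to
> >>>> >     one path at a time with a failover policy in /etc/multipath.conf,
> >>>> >     along these lines (illustrative only; the vendor/product strings
> >>>> >     are tgt's defaults and may differ on your setup):
> >>>> >
> >>>> >         devices {
> >>>> >             device {
> >>>> >                 vendor "IET"
> >>>> >                 product "VIRTUAL-DISK"
> >>>> >                 path_grouping_policy failover   # one active path, others standby
> >>>> >                 failback manual
> >>>> >             }
> >>>> >         }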
> >>>> >
> >>>> >
> >>>> >
> >>>> >
> >>>>
> >>>
> >>>
> >>
> >>
> >>
> >> --
> >> Jason
> >>
> >>
> >>
> >
> >
> >
> > --
> > Jason
> >
> >
>
>
>
> --
> Jason
>
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
