the socket ... and as we have already discussed, it's closing the
socket due to the IO timeout being hit ... and it's hitting the IO
timeout due to a deadlock caused by memory pressure from rbd-nbd forcing
IO to be pushed from the XFS cache back down into rbd-nbd.
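(As an aside, and not something established in this thread: one mitigation that could be tried is mapping with a longer nbd IO timeout and limiting how much dirty page cache can build up on top of the device, so a writeback burst is less likely to stall long enough to trip the timeout. The pool/image names below are placeholders, and the timeout flag name depends on the rbd-nbd release.)

$ rbd-nbd map --timeout 120 mypool/myimage   # newer releases spell this --io-timeout
$ sysctl -w vm.dirty_ratio=10 vm.dirty_background_ratio=5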
> On 10.09.19 at 16:10, ... wrote:
chers" command:
$ rados -p listwatchers rbd_mirroring
watcher=1.2.3.4:0/199388543 client.4154 cookie=94769010788992
watcher=1.2.3.4:0/199388543 client.4154 cookie=94769061031424
In my case, the "4154" from "client.4154" is the unique global id for
my connection to the cluster.
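(As an assumed extension of the same technique, with placeholder pool/image names: an individual image can be checked the same way, since whoever holds its exclusive lock also shows up as a watcher on its header object, rbd_header.<image id>.)

$ rbd info mypool/myimage | grep block_name_prefix   # e.g. rbd_data.<image id>
$ rados -p mypool listwatchers rbd_header.<image id>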
On Fri, Sep 13, 2019 at 10:02 AM Oliver Freyermuth
wrote:
>
> Dear Jason,
>
> thanks for the very detailed explanation! This was very instructive.
> Sadly, the watchers look correct - see details inline.
>
> Am 13.09.19 um 15:02 schrieb Jason Dillaman:
> > On Thu, Sep
On Fri, Sep 13, 2019 at 10:17 AM Jason Dillaman wrote:
>
> On Fri, Sep 13, 2019 at 10:02 AM Oliver Freyermuth
> wrote:
> >
> > Dear Jason,
> >
> > thanks for the very detailed explanation! This was very instructive.
> > Sadly, the watchers look correct -
On Fri, Sep 13, 2019 at 10:41 AM Oliver Freyermuth
wrote:
>
> On 13.09.19 at 16:30, Jason Dillaman wrote:
> > On Fri, Sep 13, 2019 at 10:17 AM Jason Dillaman wrote:
> >>
> >> On Fri, Sep 13, 2019 at 10:02 AM Oliver Freyermuth
> >> wrote:
> >>>
On Fri, Sep 13, 2019 at 11:30 AM Oliver Freyermuth
wrote:
>
> On 13.09.19 at 17:18, Jason Dillaman wrote:
> > On Fri, Sep 13, 2019 at 10:41 AM Oliver Freyermuth
> > wrote:
> >>
> >> On 13.09.19 at 16:30, Jason Dillaman wrote:
> >>> On Fri, Sep 1
er session and not the status updates. I've opened a
tracker ticket for this issue [1].
Thanks.
On Fri, Sep 13, 2019 at 12:44 PM Oliver Freyermuth
wrote:
>
> > On 13.09.19 at 18:38, Jason Dillaman wrote:
> > On Fri, Sep 13, 2019 at 11:30 AM Oliver Freyermuth
> > wrote:
>
On Fri, Sep 27, 2019 at 5:18 AM Matthias Leopold
wrote:
>
>
> Hi,
>
> I was positively surprised to see ceph-iscsi-3.3 available today.
> Unfortunately there's an error when trying to install it from yum repo:
>
> ceph-iscsi-3.3-1.el7.noarch.rpm FAILED
> 100%
> [=
On Wed, Oct 2, 2019 at 9:50 AM Kilian Ries wrote:
>
> Hi,
>
>
> I'm running a Ceph Mimic cluster with 4x iSCSI gateway nodes. The cluster was
> set up via ceph-ansible v3.2-stable. I just checked my nodes and saw that only
> two of the four configured iSCSI gw nodes are working correctly. I first
> no
On Wed, Oct 16, 2019 at 2:35 AM 展荣臻(信泰) wrote:
>
> Hi all,
> We deploy Ceph with ceph-ansible; the OSDs, MONs, and iSCSI daemons run in
> Docker.
> I created an iSCSI target according to
> https://docs.ceph.com/docs/luminous/rbd/iscsi-target-cli/.
> I discovered and logged in to the iSCSI target on another
On Wed, Oct 16, 2019 at 9:52 PM 展荣臻(信泰) wrote:
>
>
>
>
> > -----Original Message-----
> > From: "Jason Dillaman"
> > Sent: 2019-10-16 20:33:47 (Wednesday)
> > To: "展荣臻(信泰)"
> > Cc: ceph-users
> > Subject: Re: [ceph-users] ceph iscsi question
> >
Have you updated your "/etc/multipath.conf" as documented here [1]?
You should have ALUA configured but it doesn't appear that's the case
w/ your provided output.
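(For reference, a sketch of the ALUA device section in the spirit of the documentation linked above; treat [1] as authoritative for the exact attribute values.)

# /etc/multipath.conf
devices {
    device {
        vendor                 "LIO-ORG"
        product                "TCMU device"
        hardware_handler       "1 alua"
        path_grouping_policy   "failover"
        path_selector          "queue-length 0"
        failback               60
        path_checker           tur
        prio                   alua
        prio_args              exclusive_pref_bit
        fast_io_fail_tmo       25
        no_path_retry          queue
    }
}

$ systemctl reload multipathd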
On Wed, Oct 16, 2019 at 11:36 PM 展荣臻(信泰) wrote:
>
>
>
>
> > -----Original Message-----
> > From: "
On Fri, Oct 25, 2019 at 9:13 AM Steven Vacaroaia wrote:
>
> Hi,
> I am trying to increase the size of a datastore made available through ceph iscsi
> rbd.
> The steps I followed are depicted below.
> Basically, gwcli reports correct data and even the VMware device capacity is
> correct, but when I tried to inc
> [rep01 (7T)]
Did you rescan the LUNs in VMware after this latest resize attempt?
What kernel and tcmu-runner version are you using?
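(In case it helps, a few hypothetical checks; the esxcli command is run on the ESXi host, the rest on the gateway:)

$ uname -r                                    # gateway kernel version
$ rpm -q tcmu-runner ceph-iscsi               # gateway package versions
$ esxcli storage core adapter rescan --all    # on the ESXi host: rescan all HBAs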
> On Fri, 25 Oct 2019 at 09:24, Jason Dillaman wrote:
>>
>> On Fri, Oct 25, 2019 at 9:13 AM Steven Vacaroaia wrote:
>> >
On Tue, Nov 19, 2019 at 1:51 PM shubjero wrote:
>
> Florian,
>
> Thanks for posting about this issue. This is something that we have
> been experiencing (stale exclusive locks) with our OpenStack and Ceph
> cloud more frequently as our datacentre has had some reliability
> issues recently with pow
On Tue, Nov 19, 2019 at 2:49 PM Florian Haas wrote:
>
> On 19/11/2019 20:03, Jason Dillaman wrote:
> > On Tue, Nov 19, 2019 at 1:51 PM shubjero wrote:
> >>
> >> Florian,
> >>
> >> Thanks for posting about this issue. This is something that we ha
On Tue, Nov 19, 2019 at 4:09 PM Florian Haas wrote:
>
> On 19/11/2019 21:32, Jason Dillaman wrote:
> >> What, exactly, is the "reasonably configured hypervisor" here, in other
> >> words, what is it that grabs and releases this lock? It's evidently not
>
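(For what it's worth, a hypothetical way to see who currently holds the exclusive lock on an image, with placeholder pool/image names; the holder is normally the librbd client of whichever hypervisor last wrote to it.)

$ rbd status mypool/myimage    # lists the active watchers on the image
$ rbd lock ls mypool/myimage   # lists the current lock holder, if any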
On Tue, Nov 19, 2019 at 4:31 PM Florian Haas wrote:
>
> On 19/11/2019 22:19, Jason Dillaman wrote:
> > On Tue, Nov 19, 2019 at 4:09 PM Florian Haas wrote:
> >>
> >> On 19/11/2019 21:32, Jason Dillaman wrote:
> >>>> What, exactly, is the "
On Tue, Nov 19, 2019 at 4:42 PM Florian Haas wrote:
>
> On 19/11/2019 22:34, Jason Dillaman wrote:
> >> Oh totally, I wasn't arguing it was a bad idea for it to do what it
> >> does! I just got confused by the fact that our mon logs showed what
> >> looked l
On Thu, Nov 21, 2019 at 8:29 AM Vikas Rana wrote:
>
> Hi all,
>
>
>
> We have a 200TB RBD image which we are replicating using RBD mirroring.
>
> We want to test the DR copy and make sure that we have a consistent copy in
> case the primary site is lost.
>
>
>
> We did it previously and promoted the
way.
>
> Thanks,
> -Vikas
>
> -Original Message-
> From: Jason Dillaman
> Sent: Thursday, November 21, 2019 8:33 AM
> To: Vikas Rana
> Cc: ceph-users
> Subject: Re: [ceph-users] RBD Mirror DR Testing
>
> On Thu, Nov 21, 2019 at 8:29 AM Vikas Rana wrote:
>
On Thu, Nov 21, 2019 at 9:56 AM Jason Dillaman wrote:
>
> On Thu, Nov 21, 2019 at 8:49 AM Vikas Rana wrote:
> >
> > Thanks Jason for such a quick response. We are on 12.2.10.
> >
> > Checksumming a 200TB image will take a long time.
>
> How would mounting an
mount: /mnt: WARNING: device write-protected, mounted read-only.
$ ll /mnt/
total 0
-rw-r--r--. 1 root root 0 Nov 21 10:20 hello.world
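(Putting the suggestion together, a hypothetical end-to-end check with placeholder pool/image names: snapshot on the primary, let it replicate, then map and mount that snapshot read-only on the DR side without promoting anything.)

# primary site
$ rbd snap create mypool/myimage@dr-test
# DR site, once the snapshot shows up in `rbd snap ls mypool/myimage`
$ rbd-nbd map mypool/myimage@dr-test             # snapshots are mapped read-only
$ mount -o ro,norecovery,nouuid /dev/nbd0 /mnt   # norecovery/nouuid for an XFS filesystem
$ ls -l /mnt
$ umount /mnt && rbd-nbd unmap /dev/nbd0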
> Thanks,
> -Vikas
>
> -Original Message-
> From: Jason Dillaman
> Sent: Thursday, November 21, 2019 9:58 AM
> To: Vikas Rana
> Cc: ce
/dev/nbd0: can't read superblock
Doesn't look like you are mapping at a snapshot.
>
> Any suggestions to test the DR copy any other way or if I'm doing something
> wrong?
>
> Thanks,
> -Vikas
>
> -Original Message-
> From: Jason Dillaman
> Sent: T
the UNIX domain socket to '/var/run/ceph/cephdr-client.admin.asok': (17)
> File exists
>
>
>
> Did we miss anything, and why didn't the snapshot replicate to the DR side?
>
> Thanks,
> -Vikas
>
> -Original Message-
> From: Jason Dillaman
>