On Mon, 2017-04-10 at 12:13 -0500, Mike Christie wrote:
>
> > LIO-TCMU+librbd-iscsi [1] [2] looks really promising and seems to be
> > the way to go. It would be great if somebody has insight about the
> > maturity of the project. Is it ready for testing purposes?
> >
>
> It is not
On 04/10/2017 01:21 PM, Timofey Titovets wrote:
> JFYI: Today we got a totally stable Ceph + ESXi setup "without hacks",
> and it passes stress tests.
>
> 1. Don't try to pass RBD directly to LIO; that setup is unstable.
> 2. Instead, use QEMU + KVM (I use Proxmox to create the VM).
> 3. Attach the RBD to the VM as a VIRTIO-SCSI disk (must be exported by
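For illustration, step 3 with a plain QEMU command line can look roughly
like this (a sketch only: the pool name "rbd", image name "vm-disk0", and
cephx user "admin" are placeholders, and QEMU must be built with librbd
support):

    # Attach an RBD image to the guest through a virtio-scsi controller
    qemu-system-x86_64 -enable-kvm -m 4096 \
      -device virtio-scsi-pci,id=scsi0 \
      -drive file=rbd:rbd/vm-disk0:id=admin,format=raw,if=none,id=rbd0 \
      -device scsi-hd,drive=rbd0,bus=scsi0.0

The guest then sees an ordinary SCSI disk that it can re-export to ESXi
with whatever target software it runs; librbd inside QEMU handles the Ceph
I/O and failover behaviour instead of LIO.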
On 04/06/2017 08:46 AM, David Disseldorp wrote:
> On Thu, 6 Apr 2017 14:27:01 +0100, Nick Fisk wrote:
> ...
>>> I'm not too sure what you're referring to WRT the spiral of death, but we
>>> did patch some LIO issues encountered when a command was aborted while
>>> outstanding at the LIO backstore layer. These specific fixes are carried
>>> in the
On 04/06/2017 03:22 AM, yipik...@gmail.com wrote:
> On 06/04/2017 09:42, Nick Fisk wrote:
>>
>> I assume Brady is referring to the death spiral LIO gets into with
>> some initiators, including vmware, if an IO takes longer than about
>> 10s. I haven’t heard of anything, and can’t see any changes,
From: Brady Deetz
Sent: Thursday, April 06, 2017 3:21 PM
To: ceph-users
Subject: Re: [ceph-users] rbd iscsi gateway question
I appreciate everybody's responses here. I remember the announcement of
PetaSAN a while back on here and some concerns about it.
Is anybody using it in production yet?
> -----Original Message-----
> From: David Disseldorp [mailto:dd...@suse.de]
> Sent: 06 April 2017 14:06
> To: Nick Fisk <n...@fisk.me.uk>
> Cc: 'Maged Mokhtar' <mmokh...@petasan.org>; 'Brady Deetz'
> <bde...@gmail.com>; 'ceph-users' <ceph-us...@ceph.com>
On Apr 5, 2017 9:58 PM, "Brady Deetz" wrote:
> I apologize if this is a duplicate of something recent, but I'm not
> finding much. Does the issue still exist where dropping an OSD results
> in a LUN's I/O hanging?
Hi,
On Thu, 6 Apr 2017 13:31:00 +0100, Nick Fisk wrote:
> > I believe there was a request to include it in the mainstream kernel,
> > but it did not happen, probably waiting for the TCMU solution, which
> > will be a better/cleaner design.
Indeed, we're proceeding with TCMU as a future upstream acceptable
> -----Original Message-----
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Maged Mokhtar
> Sent: 06 April 2017 12:21
> To: Brady Deetz <bde...@gmail.com>; ceph-users <ceph-us...@ceph.com>
> Subject: Re: [ceph-users] rbd iscsi gateway question
The io hang (it is actually a pause, not a hang) happens in Ceph only in the
case of a simultaneous failure of 2 hosts or 2 OSDs on separate hosts. A
single host/OSD being out will not cause this. In the PetaSAN project
(www.petasan.org) we use LIO/krbd. We have done a lot of tests on VMware; in
case of io
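For reference, the pause Maged describes corresponds to a placement group
falling below the pool's min_size. A minimal sketch, assuming a replicated
pool named "rbd" with the common size=3/min_size=2 settings:

    # Replica count, and the minimum replicas required to keep serving I/O
    ceph osd pool get rbd size
    ceph osd pool get rbd min_size

    # With size=3/min_size=2, two simultaneous replica failures pause I/O
    # on the affected PGs until recovery; a single failure does not.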
> On 6 Apr 2017, at 08:42, Nick Fisk wrote:
>
> I assume Brady is referring to the death spiral LIO gets into with some
> initiators, including vmware, if an IO takes longer than about 10s.
We have occasionally seen this issue with vmware+LIO, almost always when
upgrading or when a node needs a reboot.
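One way to see whether an initiator has entered this abort/reset loop is to
watch the gateway's kernel log for LIO task-management messages (a sketch;
the exact log strings vary by kernel version):

    # ABORT_TASK entries from LIO indicate the initiator gave up on
    # outstanding I/Os, the first step of the death spiral
    dmesg -T | grep -i abort_task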
> * RBD Snapshot deletion – disk latency through roof, cluster
> unresponsive for minutes at a time, won’t do again.
>
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Brady Deetz
> Sent: Thursday, 6 April 2017 12:58 PM
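On the snapshot-deletion point above: Jewel-era clusters can throttle snap
trimming so deletions don't swamp the disks. A sketch, assuming the
osd_snap_trim_sleep knob of that era (0.1 s is an illustrative value, not a
recommendation):

    # Inject at runtime on all OSDs
    ceph tell osd.* injectargs '--osd_snap_trim_sleep 0.1'

    # Or persist in ceph.conf under [osd]:
    # osd snap trim sleep = 0.1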
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Adrian Saul
Sent: 06 April 2017 05:32
To: Brady Deetz <bde...@gmail.com>; ceph-users <ceph-us...@ceph.com>
Subject: Re: [ceph-users] rbd iscsi gateway question
I am not sure if there is a hard and fast rule
From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
Brady Deetz
Sent: Thursday, 6 April 2017 12:58 PM
To: ceph-users
Subject: [ceph-users] rbd iscsi gateway question
I apologize if this is a duplicate of something recent, but I'm not finding
much. Does the issue still exist where dropping an OSD results in a LUN's
I/O hanging?
I'm attempting to determine if I have to move off of VMware in order to
safely use Ceph as my VM storage.
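For background on the question: I/O to a PG stalls until the failed OSD is
actually marked down, and with Jewel-era defaults that detection window
(osd_heartbeat_grace, 20 s) is longer than the roughly 10 s abort timer of
initiators such as ESXi. A sketch for inspecting and, cautiously,
tightening it (shorter grace means faster failover but more false down
reports):

    # Current grace period before an unresponsive OSD is reported down
    # (run on the host carrying osd.0)
    ceph daemon osd.0 config get osd_heartbeat_grace

    # Tighten it in ceph.conf under [osd] (trade-off: more flapping):
    # osd heartbeat grace = 10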