> From: Christian Balzer <ch...@gol.com>
> Subject: Re: [ceph-users]
> suse_enterprise_storage3_rbd_LIO_vmware_performance_bad
>
> Hi Nick,
>
>
> On Fri, Jul 1, 2016 at 2:11 PM, Nick Fisk <n...@fisk.me.uk> wrote:
>
>
>
> > However, there are a number of pain
> for example, ~30ms is still a bit high. I wonder if
> the default queue depths on your iSCSI target are too low as well?
>
> Nick
>
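A minimal sketch of where those queue depths live, assuming the stock LIO target and the ESXi software iSCSI initiator (the IQN is the one used later in this thread; the value 128 is only an example):

    # On the LIO gateway: raise the command window advertised by the target portal group
    targetcli /iscsi/iqn.2016-07.org.linux-iscsi.iscsi.x86:testvol/tpg1 set attribute default_cmdsn_depth=128

    # On each ESXi host: raise the per-LUN queue depth of the software iSCSI initiator
    # (module parameter change, takes effect after a reboot)
    esxcli system module parameters set -m iscsi_vmk -p iscsivmk_LunQDepth=128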
>> -Original Message-
>> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
>> Oliver Dzombic
>> S
> -Original Message-
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Lars Marowsky-Bree
> Sent: 04 July 2016 11:36
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users]
> suse_enterprise_storage3_rbd_LIO_vmware_performance_bad
>
On 2016-07-01T19:11:34, Nick Fisk wrote:
> To summarise,
>
> LIO is just not working very well at the moment because of the ABORT Tasks
> problem, this will hopefully be fixed at some point. I'm not sure if SUSE
> works around this, but see below for other pain points with
On 2016-07-01T17:18:19, Christian Balzer wrote:
> First off, it's somewhat funny that you're testing the repackaged SUSE
> Ceph, but asking for help here (with Ceph being owned by Red Hat).
*cough* Ceph is not owned by RH. RH acquired the Inktank team and the
various trademarks,
> -Original Message-
> From: mq [mailto:maoqi1...@126.com]
> Sent: 04 July 2016 08:13
> To: Nick Fisk <n...@fisk.me.uk>
> Subject: Re: [ceph-users]
> suse_enterprise_storage3_rbd_LIO_vmware_performance_bad
>
> Hi Nick
> I have tested NFS: since NFS
> From: ceph-users [mailto:ceph-users-boun...@lists.ceph.com] On Behalf Of
> Oliver Dzombic
> Sent: 01 July 2016 09:27
> To: ceph-users@lists.ceph.com
> Subject: Re: [ceph-users]
> suse_enterprise_storage3_rbd_LIO_vmware_performance_bad
>
> Hi,
>
> my experience:
>
> ceph + iscsi ( multipath ) + vmware == worst
Hi
1.
2 sw iscsi gateways (deployed on the osd/monitor nodes), created using lrbd; the iscsi target is LIO
configuration:
{
    "auth": [
        {
            "target": "iqn.2016-07.org.linux-iscsi.iscsi.x86:testvol",
            "authentication": "none"
        }
    ],
    "targets": [
        {
            "target":
Hi,
my experience:
ceph + iscsi ( multipath ) + vmware == worst
Better you search for another solution.
vmware + nfs + ceph might have a much better performance.
If you are able to get vmware running with iscsi and ceph, I would be
>>very<< interested in what/how you did that.
--
Mit
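One common way to try the NFS route mentioned above is to map an RBD image with the kernel client on a gateway host and export a filesystem on it to the ESXi hosts; a rough sketch (the pool, image, paths and export network are made up for illustration):

    # on the gateway host
    rbd map rbd/vmware-datastore
    mkfs.xfs /dev/rbd/rbd/vmware-datastore
    mkdir -p /export/vmware
    mount /dev/rbd/rbd/vmware-datastore /export/vmware

    # /etc/exports
    /export/vmware  192.168.1.0/24(rw,no_root_squash,sync)

    exportfs -ra

The ESXi hosts can then add /export/vmware as an NFS datastore pointing at the gateway's address.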
Hello,
On Fri, 1 Jul 2016 13:04:45 +0800 mq wrote:
> Hi list
> I have tested SUSE Enterprise Storage 3 using 2 iscsi gateways attached
> to vmware. The performance is bad.
First off, it's somewhat funny that you're testing the repackaged SUSE
Ceph, but asking for help here (with Ceph being owned by Red Hat).
Hi list
I have tested SUSE Enterprise Storage 3 using 2 iscsi gateways attached to
vmware. The performance is bad. I have turned off VAAI following the VMware KB
(https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1033665).
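For reference, the VAAI primitives that KB article covers can also be toggled from the ESXi CLI; a sketch, assuming the standard advanced options (setting them back to 1 re-enables the offloads):

    esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedMove
    esxcli system settings advanced set --int-value 0 --option /DataMover/HardwareAcceleratedInit
    esxcli system settings advanced set --int-value 0 --option /VMFS3/HardwareAcceleratedLocking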