On Fri, 31 Aug 2012, Dietmar Maurer wrote:
>> RBD waits for the data to be on disk on all replicas. It's pretty easy
>> to relax this to in memory on all replicas, but there's no option for
>> that right now.
> I thought that is dangerous, because you can lose data?
By putting the journal in a tmpfs you effectively get that in-memory behaviour already - and the same risk of losing acknowledged writes on a crash.
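For illustration only (not from the original mails): a hypothetical ceph.conf fragment that puts the OSD journal on tmpfs, which amounts to the in-memory acknowledgement being discussed; the path and size below are made up:

    [osd]
        ; journal on a RAM-backed filesystem - acks at memory speed,
        ; but journal entries not yet flushed to the filestore are lost on a crash or power failure
        osd journal = /dev/shm/osd.$id.journal
        osd journal size = 1024        ; MB

That is exactly why such a setup is considered dangerous for data you care about.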
Sorry Dieter,
Not trying to say "you are wrong" or anything like that - just trying to
add to the problem-solving body of knowledge that, from what *I* have
tried out, the 'sync' issue does not look to be the bad guy here - although
more analysis is always welcome (usual story - my findings should be double-checked).
Mark, Inktank,
OK, it is very likely that 'sync_file_range' is not the major slowdown
'culprit'.
But which areas (design, current implementation, protocol, interconnect,
tuning parameters, ...)
would you rate as the major slowdown effect(s)?
Best Regards,
-Dieter
----- Original Message -----
From: "Josh Durgin"
To: "Alexandre DERUMIER"
Cc: "Dieter Kasper", ceph-devel@vger.kernel.org, "Andreas Bluemle"
Sent: Thursday, 30 August 2012 18:16:47
Subject: Re: RBD performance - tuning hints
On 08/30/2012 09:12 AM, Alexandre DERUMIER wrote:
>> >
>> > On Wed, Aug 29, 2012 at 07:37:36PM +0200, Josh Durgin wrote:
>> >> On 08/29/2012 01:50 AM, Alexandre DERUMIER wrote:
>> >> > Nice results !
>> >> > (can you run the same benchmark from a qemu-kvm guest with the virtio driver?)
(... wouldn't slow anything down, of course).
:)
-Greg
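For the qemu-kvm/virtio benchmark asked about above, a hedged sketch (pool and image names are placeholders, not commands from this thread) of how an RBD image is usually attached to a guest through librbd as a virtio disk:

    # attach image "bench" from pool "rbd" as a virtio disk; names and sizes are examples
    qemu-system-x86_64 -enable-kvm -m 2048 \
        -drive file=rbd:rbd/bench:rbd_cache=false,format=raw,if=virtio,cache=none

Inside the guest the image appears as a virtio block device (e.g. /dev/vdb), so fio or dd can be run against it for comparison with the host-side numbers.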
>
> In such a configuration only the Ceph code and the interconnect (10GbE/IP)
> would be the limiting factor.
>
> Cheers,
> -Dieter
>
>
RBD waits for the data to be on disk on all replicas. It's pretty easy
to relax this to in memory on all replicas, but there's no option for
that right now.

Josh
----- Original Message -----
From: "Dieter Kasper"
To: "Alexandre DERUMIER"
Cc: ceph-devel@vger.kernel.org, "Andreas Bluemle"
Sent: Thursday, 30 August 2012 18:02:05
Subject: Re: RBD performance - tuning hints
On Thu, Aug 30, 2012 at 05:46:35PM +0200, Alexandre DERUMIER wrote:
> Thanks
>
> >> 8x SSD, 200GB each
>
> 2 iops seem pretty low, no?
well, you have ...
(... throughput bench in the mailing list)
----- Original Message -----
From: "Dieter Kasper"
To: "Alexandre DERUMIER"
Cc: ceph-devel@vger.kernel.org
Sent: Thursday, 30 August 2012 17:33:42
Subject: Re: RBD performance - tuning hints
On Thu, Aug 30, 2012 at 05:28:02PM +0200, Alexandre DERUMIER wrote:
> Hi Alexandre,
>
> with the 4 filestore parameters ...
Thanks for the report!
vs. your first benchmark, is it with RBD 4M or 64K objects?
(how many SSDs per node?)
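On the 4M-vs-64K point: the RBD object size is fixed per image at creation time via --order (object size = 2^order bytes). A hedged sketch with made-up pool/image names and sizes, not the images used in the thread:

    # default object size is 4 MB (order 22)
    rbd create --size 10240 rbd/bench-4m
    # same-sized image striped over 64 KB objects (order 16)
    rbd create --size 10240 --order 16 rbd/bench-64k
    rbd info rbd/bench-64k    # the "order 16" line confirms 64 KB objects

Smaller objects turn each large RBD write into many more (smaller) OSD operations, which is why the 4M and 64K results can differ so much.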
----- Original Message -----
From: "Dieter Kasper"
To: "Alexandre DERUMIER"
Cc: ceph-devel@vger.kernel.org
Sent: Thursday, 30 August 2012 16:56:34
Subject: Re: RBD performance - tuning hints
> >> > and we'd never
> >> > be able to have more than 2 iops, with a full-SSD 3-node cluster)
> >> >
> >> >>> How can I set the variables for when the journal data has to go to the OSD ?
> >> >>> (after X seconds and/or when Y % full)
> >> > I think you can tune these values:
>
> filestore max sync interval = 30
> filestore min sync interval = 29
> filestore flusher = false
> filestore queue max ops = 1
>
>
>
>> Increasing filestore_op_threads might help as well.
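Pulling the knobs from this sub-thread together, a hedged ceph.conf sketch; the four values are the ones quoted above, while the comments and the op-threads value are assumptions added here, not recommendations from the list:

    [osd]
        ; let the journal absorb writes for up to ~30 s before the filestore must sync
        filestore max sync interval = 30
        filestore min sync interval = 29
        ; disable the flusher thread that calls sync_file_range() on freshly written files
        filestore flusher = false
        ; ops allowed to queue in front of the filestore (value as quoted above)
        filestore queue max ops = 1
        ; extra threads servicing filestore operations (default is 2; 4 is an assumption)
        filestore op threads = 4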
----- Original Message -----
From: "Dieter Kasper"
To: ceph-devel@vger.kernel.org
Cc: "Dieter Kasper (KD)"
Sent: Tuesday, 28 August 2012 19:48:42
Subject: RBD performance - tuning hints
Hi,
on my 4-node system (SSD + 10GbE, see bench-config.txt for details)
I can observe a pretty nice rados bench performance
(see bench-rados.txt for details):
On Tue, Aug 28, 2012 at 08:53:46PM +0200, Smart Weblications GmbH - Florian
Wiessner wrote:
> On 28.08.2012 19:48, Dieter Kasper wrote:
> > Hi,
> >
> > on my 4-node system (SSD + 10GbE, see bench-config.txt for details)
> > I can observe a pretty nice rados bench performance
> > (see bench-rados.txt for details):
On 28.08.2012 19:48, Dieter Kasper wrote:
> Hi,
>
> on my 4-node system (SSD + 10GbE, see bench-config.txt for details)
> I can observe a pretty nice rados bench performance
> (see bench-rados.txt for details):
I'd like to know which 10GbE switch you have used? Do you use 10GBase-T?
--
Hi,
on my 4-node system (SSD + 10GbE, see bench-config.txt for details)
I can observe a pretty nice rados bench performance
(see bench-rados.txt for details):
Bandwidth (MB/sec): 961.710
Max bandwidth (MB/sec): 1040
Min bandwidth (MB/sec): 772
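For reference, summary lines like these are what the stock rados benchmark prints at the end of a write run; a hedged sketch of typical invocations (pool name, duration and block size are placeholders, not the exact commands behind bench-rados.txt):

    # sequential-write bandwidth: 60 s with 16 concurrent 4 MB objects (the defaults)
    rados -p rbd bench 60 write
    # small-block variant: 4 KB writes, to look at IOPS rather than MB/s
    rados -p rbd bench 60 write -b 4096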
Also the bandwidth performance generated with ...