On 05/23/2012 02:03 AM, Andrey Korolyov wrote:
Hi Josh,
Can you please reply to the list on this question? It is important when
someone wants to build an HA KVM cluster on the RBD backend and needs
the writeback cache. Thanks!
On Wed, May 23, 2012 at 10:30 AM, Josh Durgin wrote:
On 05/22/2012 11:18 PM, Stefan Priebe - Profihost AG wrote:
Hi,
For Stefan:
Increasing socket memory gave me a few percent more on fio tests inside
the VM (I measured the 'max-iops-until-ceph-throws-message-about-delayed-write'
parameter). What is more important, the osd process, if possible, should be
pinned to a dedicated core or two, and all other processes should be ...
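A minimal sketch of the pinning Andrey describes (core numbers and the
process name are illustrative assumptions, not values from this thread):

  # Pin every running ceph-osd process to two dedicated cores (here 2 and 3):
  for pid in $(pidof ceph-osd); do
      taskset -pc 2,3 "$pid"
  done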
On 23.05.2012 09:19, Josh Durgin wrote:
> On 05/23/2012 12:01 AM, Stefan Priebe - Profihost AG wrote:
> You can use any of the rbd-specific options (like rbd_cache_max_dirty)
> with qemu >= 0.15.
>
> You can set them in a global ceph.conf file, or specify them on the qemu
> command line like:
>
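Josh's example is cut off in the archive; a sketch of both forms, with a
made-up pool/image name and an illustrative cache size:

  # ceph.conf on the KVM host:
  [client]
      rbd cache = true
      rbd cache max dirty = 25165824

  # ...or the same options appended to the rbd: filename on the qemu
  # command line (rest of the qemu invocation omitted):
  qemu -drive file=rbd:rbd/vm-disk:rbd_cache=true:rbd_cache_max_dirty=25165824,format=raw,if=virtio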
On 05/22/2012 11:18 PM, Stefan Priebe - Profihost AG wrote:
Hi,
So try enabling RBD writeback caching; see
http://marc.info/?l=ceph-devel&m=133758599712768&w=2
will test tomorrow. Thanks.
Can we pass this to the qemu-drive option?
Yup, see http://article.gmane.org/gmane.comp.file-systems.c ...
On Tuesday, May 22, 2012 at 2:00 PM, Stefan Priebe wrote:
> On 22.05.2012 22:49, Greg Farnum wrote:
> > Anyway, it looks like you're just paying a synchronous write penalty
>
> What exactly does that mean? Shouldn't one threaded write to four
> 260MB/s devices give at least 100MB/s?
Well, ...
On 22.05.2012 22:49, Greg Farnum wrote:
Anyway, it looks like you're just paying a synchronous write penalty
What exactly does that mean? Shouldn't one threaded write to four
260MB/s devices give at least 100MB/s?
since with 1 write at a time you're getting 30-40MB/s out of rados bench, but ...
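A rough fio sketch of the difference between a single write in flight and a
deeper queue, run inside the guest (the target device, block size, and
runtime are made-up illustration values; writing to the raw device is
destructive):

  # One outstanding write: every I/O pays the full Ceph round-trip latency.
  fio --name=qd1 --filename=/dev/vdb --rw=write --bs=4m --ioengine=libaio \
      --direct=1 --iodepth=1 --runtime=30 --time_based --group_reporting

  # Sixteen writes in flight: latency overlaps and throughput rises.
  fio --name=qd16 --filename=/dev/vdb --rw=write --bs=4m --ioengine=libaio \
      --direct=1 --iodepth=16 --runtime=30 --time_based --group_reporting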
On 22.05.2012 22:48, Mark Nelson wrote:
Can you use something like iostat or collectl to check and see if the
write throughput to each SSD is roughly equal during your tests?
It is, but just around 20-40MB/s. But they can write 260MB/s with
sequential writes.
> Also, what FS are you using and ...
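A sketch of the per-device check Mark suggests (device names are examples):

  # Extended, per-device stats in MB every 2 seconds on each OSD host;
  # compare the write-throughput column across the SSDs.
  iostat -xm 2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

  # or the disk-detail view in collectl:
  collectl -sD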
On 22.05.2012 21:52, Greg Farnum wrote:
On Tuesday, May 22, 2012 at 12:40 PM, Stefan Priebe wrote:
Huh. That's less than I would expect. Especially since it ought to be going
through the page cache.
What version of RBD is KVM using here?
v0.47.1
Can you (from the KVM host) run
"rados -p dat ...
On 22.05.2012 21:35, Greg Farnum wrote:
What does your test look like? With multiple large IOs in flight we can
regularly fill up a 1GbE link on our test clusters. With smaller or fewer IOs
in flight performance degrades accordingly.
iperf shows 950Mbit/s so this is OK (from KVM host to OSD ...
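A quick sketch of that kind of raw-bandwidth check (the hostname is a
placeholder):

  # On the OSD host:
  iperf -s
  # On the KVM host:
  iperf -c osd-host -t 10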
What does your test look like? With multiple large IOs in flight we can
regularly fill up a 1GbE link on our test clusters. With smaller or fewer IOs
in flight performance degrades accordingly.
On Tuesday, May 22, 2012 at 5:45 AM, Stefan Priebe - Profihost AG wrote:
> Hi list,
>
> my ceph bl ...
Hi,
I ran into almost the same problem about two months ago, and there are a
couple of corner cases: near-default tcp parameters, small journal size,
disks that are not backed by a controller with NVRAM cache, and high load
on the osd's cpu caused by side processes. Finally, I was able to achieve
115MB/s for ...
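A sketch of the kind of settings Andrey is pointing at (all values are
illustrative, not recommendations from this thread):

  # Larger-than-default socket buffers:
  sysctl -w net.core.rmem_max=16777216
  sysctl -w net.core.wmem_max=16777216
  sysctl -w net.ipv4.tcp_rmem="4096 87380 16777216"
  sysctl -w net.ipv4.tcp_wmem="4096 65536 16777216"

  # A larger OSD journal, in ceph.conf (size in MB):
  [osd]
      osd journal size = 10240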