Hi,

I've run into almost the same problem about two months ago, and there are a
couple of corner cases: near-default TCP parameters, a small journal
size, disks that are not backed by a controller with an NVRAM cache, and
high CPU load on the OSD hosts caused by other processes. In the end I
was able to achieve 115 MB/s for large sequential writes on a raw RBD
block device inside a VM, with the journal on tmpfs and the OSDs on a
RAID0 built on top of three SATA disks.
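For reference, the kinds of changes involved look roughly like the sketch below. The specific values and paths are illustrative assumptions, not the exact settings from my setup — tune them for your own hardware:

```
# /etc/sysctl.d/ceph-net.conf -- raise TCP buffer limits above the
# near-default values (illustrative sizes, adjust to your links)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216

# /etc/ceph/ceph.conf (OSD section) -- larger, tmpfs-backed journal
# (path is hypothetical; size is in MB)
[osd]
    osd journal = /mnt/tmpfs/journal-$id
    osd journal size = 2048
```

Note that a journal on tmpfs is fine for benchmarking but trades away durability on power loss, so don't use it that way in production.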

On Tue, May 22, 2012 at 4:45 PM, Stefan Priebe - Profihost AG
<s.pri...@profihost.ag> wrote:
> Hi list,
>
> my ceph block testcluster is now running fine.
>
> Setup:
> 4x ceph servers
>  - 3x mon with /mon on local os SATA disk
>  - 4x OSD with /journal on tmpfs and /srv on intel ssd
>
> all of them use 2x 1Gbit/s lacp trunk.
>
> 1x KVM Host system (2x 1Gbit/s lacp trunk)
>
> With one KVM i do not get more than 40MB/s and my network link is just
> at 40% of 1Gbit/s.
>
> Is this expected? If not where can i start searching / debugging?
>
> Thanks,
> Stefan
> --
> To unsubscribe from this list: send the line "unsubscribe ceph-devel" in
> the body of a message to majord...@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html