Re: [ceph-users] slow performance: sanity check

2017-04-06 Thread Mark Nelson
On 04/06/2017 01:54 PM, Adam Carheden wrote:
> 60-80MB/s for what sort of setup? Is that 1GbE rather than 10GbE?
60-80MB/s per disk, before any replication takes place, assuming fairly standard 7200RPM disks and journals on SSDs with fast O_DSYNC write performance. Any network
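If the network is a suspect, a quick point-to-point bandwidth check between two OSD nodes can rule it out. A minimal sketch, assuming iperf3 is installed on both nodes; the hostname is a placeholder:

    # on the first node, start a server
    iperf3 -s
    # on a second node, run a 30-second test against it
    iperf3 -c <first-node-hostname> -t 30

On a healthy 10GbE link this should report somewhere near 9-10 Gbit/s.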

Re: [ceph-users] slow performance: sanity check

2017-04-06 Thread Adam Carheden
60-80MB/s for what sort of setup? Is that 1GbE rather than 10GbE? I consistently get 80-90MB/s of bandwidth as measured by `rados bench -p rbd 10 write` run from a ceph node on a cluster with:
* 3 nodes
* 4 OSDs/node, 600GB 15kRPM SAS disks
* 1GB disk controller write cache shared by all disks
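For comparison, a slightly fuller rados bench run also covers reads and cleans up the benchmark objects afterwards. A sketch, assuming the default rbd pool and 16 concurrent ops:

    # write test, keep the objects so the read tests have data
    rados bench -p rbd 30 write -t 16 --no-cleanup
    # sequential and random read tests against those objects
    rados bench -p rbd 30 seq -t 16
    rados bench -p rbd 30 rand -t 16
    # remove the leftover benchmark objects
    rados -p rbd cleanup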

Re: [ceph-users] slow performance: sanity check

2017-04-06 Thread Mark Nelson
With filestore on XFS using SSD journals that have good O_DSYNC write performance, we typically see 60-80MB/s per disk before replication for large object writes. That assumes there are no other bottlenecks or background activity (PG splitting, recovery, network issues, etc.).
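One common way to check whether a journal SSD actually has good O_DSYNC write performance is a small synchronous 4k write test with fio. A sketch only; /dev/sdX is a placeholder, and this writes directly to the raw device, so run it only against an unused disk or journal partition:

    fio --filename=/dev/sdX --direct=1 --sync=1 --rw=write --bs=4k \
        --numjobs=1 --iodepth=1 --runtime=60 --time_based \
        --group_reporting --name=journal-sync-test

SSDs that handle synchronous writes poorly will show very low IOPS here even if their headline sequential numbers look fine.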

Re: [ceph-users] slow performance: sanity check

2017-04-06 Thread Pasha
Also make sure your PGs per pool and across the entire cluster are correct... you want roughly 50-100 PGs per OSD in total, otherwise performance can be impacted. Also, if the cluster is new, it might take a little while to rebalance and become 100% available; during that time speed can be affected too. Those are a
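To check and adjust the PG count, something like the following works. A sketch, assuming a single rbd pool, 12 OSDs and 3 replicas; the target numbers are only the usual rule of thumb:

    ceph osd pool get rbd pg_num
    # rule of thumb: (number of OSDs * 100) / replica count, rounded up to a power of two
    # e.g. 12 OSDs * 100 / 3 replicas = 400 -> pg_num 512
    ceph osd pool set rbd pg_num 512
    ceph osd pool set rbd pgp_num 512

Note that raising pg_num triggers PG splitting and data movement, so expect degraded performance until the cluster settles again.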

Re: [ceph-users] slow performance: sanity check

2017-04-06 Thread Stanislav Kopp
I've reduced the OSDs to 12 and moved the journals to SSD drives, and now writes get a "boost" to ~33-35MB/s. Is that the maximum without full SSD pools? Best, Stan
2017-04-06 9:34 GMT+02:00 Stanislav Kopp:
> Hello,
>
> I'm evaluating a ceph cluster to see if we can use it for our
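As a rough sanity check of those numbers, assuming 3x replication and the 60-80MB/s-per-spinner figure mentioned elsewhere in this thread:

    12 OSDs * 60-80 MB/s per disk  = 720-960 MB/s raw write capacity
    720-960 MB/s / 3 replicas      = roughly 240-320 MB/s expected aggregate

So 33-35MB/s from a single rados bench client suggests a bottleneck somewhere other than the spindles themselves (network, PG count, controller cache behaviour, or the single benchmark client).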

Re: [ceph-users] slow performance: sanity check

2017-04-06 Thread Piotr Dałek
On 04/06/2017 09:34 AM, Stanislav Kopp wrote:
> Hello, I'm evaluating a ceph cluster to see if we can use it for our virtualization solution (proxmox). I'm using 3 nodes running Ubuntu 16.04 with stock ceph (10.2.6); every OSD uses a separate 8 TB spinning drive (XFS), and the MONITORs are installed on the

[ceph-users] slow performance: sanity check

2017-04-06 Thread Stanislav Kopp
Hello, I'm evaluating a ceph cluster to see if we can use it for our virtualization solution (proxmox). I'm using 3 nodes running Ubuntu 16.04 with stock ceph (10.2.6); every OSD uses a separate 8 TB spinning drive (XFS), the MONITORs are installed on the same nodes, and all nodes are connected via 10G
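A few read-only commands that help sanity-check a setup like this before benchmarking; a sketch, nothing here changes any state:

    ceph -s                    # overall health, plus any recovery/backfill in progress
    ceph osd df tree           # per-OSD utilization and PG counts
    ceph osd pool ls detail    # replica size, pg_num and pgp_num for every pool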