Hi,

        When we rebuilt our Ceph cluster, we opted to make our RBD storage
replication level 3 rather than the previously configured replication level 2.

        Things are MUCH slower (5 nodes, 13 OSDs) than before, even though most
of our I/O is reads. Is this to be expected?
What are the recommended ways of seeing who/what is consuming the largest
amount of disk/network bandwidth?

Thanks!
        Jeff

-- 
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
