Hi Mike,
I don't have experience with RBD mounts, but I see the same effect with RBD.

You can do some tuning to get better results (disable debug logging and so on).

As a hint, here are some values from a ceph.conf:
[osd]
     debug asok = 0/0
     debug auth = 0/0
     debug buffer = 0/0
     debug client = 0/0
     debug context = 0/0
     debug crush = 0/0
     debug filer = 0/0
     debug filestore = 0/0
     debug finisher = 0/0
     debug heartbeatmap = 0/0
     debug journal = 0/0
     debug journaler = 0/0
     debug lockdep = 0/0
     debug mds = 0/0
     debug mds balancer = 0/0
     debug mds locker = 0/0
     debug mds log = 0/0
     debug mds log expire = 0/0
     debug mds migrator = 0/0
     debug mon = 0/0
     debug monc = 0/0
     debug ms = 0/0
     debug objclass = 0/0
     debug objectcacher = 0/0
     debug objecter = 0/0
     debug optracker = 0/0
     debug osd = 0/0
     debug paxos = 0/0
     debug perfcounter = 0/0
     debug rados = 0/0
     debug rbd = 0/0
     debug rgw = 0/0
     debug throttle = 0/0
     debug timer = 0/0
     debug tp = 0/0
     filestore_op_threads = 4
     osd max backfills = 1
     osd mount options xfs = "rw,noatime,inode64,logbufs=8,logbsize=256k,allocsize=4M"
     osd mkfs options xfs = "-f -i size=2048"
     osd recovery max active = 1
     osd_disk_thread_ioprio_class = idle
     osd_disk_thread_ioprio_priority = 7
     osd_disk_threads = 1
     osd_enable_op_tracker = false
     osd_op_num_shards = 10
     osd_op_num_threads_per_shard = 1
     osd_op_threads = 4
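
If you don't want to restart the daemons to try this, most of the debug
settings can also be injected into the running OSDs at runtime; a rough
sketch (the list of debug options here is only an example, extend it as
needed):

     # silence some debug subsystems on all running OSDs without a restart
     ceph tell osd.* injectargs '--debug_osd 0/0 --debug_ms 0/0 --debug_filestore 0/0'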

Udo

On 19.04.2016 11:21, Mike Miller wrote:
> Hi,
>
> RBD mount
> ceph v0.94.5
> 6 OSD with 9 HDD each
> 10 GBit/s public and private networks
> 3 MON nodes 1Gbit/s network
>
> An RBD mounted with a btrfs filesystem performs really badly when
> reading. I tried readahead in all combinations, but that does not help
> in any way.
>
> Write rates are very good, in excess of 600 MB/s and up to 1200 MB/s
> (average 800 MB/s).
> Read rates on the same mounted rbd are about 10-30 MB/s !?
>
> Of course, both writes and reads are from a single client machine with
> a single write/read command. So I am looking at single threaded
> performance.
> Actually, I was hoping to see at least 200-300 MB/s when reading, but
> I am seeing 10% of that at best.
>
> Thanks for your help.
>
> Mike
