Hi Steve,
I'm also looking for improvements in single-threaded reads.

Somewhat higher values (maybe twice as high?) should be possible with your config.
I have 5 nodes with 60 4-TB HDDs and got the following:
rados -p test bench -b 4194304 60 seq -t 1 --no-cleanup
Total time run:        60.066934
Total reads made:     863
Read size:            4194304
Bandwidth (MB/sec):    57.469
Average Latency:       0.0695964
Max latency:           0.434677
Min latency:           0.016444
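For completeness: the seq test can only read back objects that an earlier write bench left behind in the pool, so something like this has to run first (pool name "test" just as in my example above):

rados -p test bench 60 write -b 4194304 -t 1 --no-cleanup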

In my case I had some OSDs (xfs) with a high fragmentation (20%).
Changing the mount options and defragmenting helped slightly.
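To check and fix that, I look at the fragmentation factor with xfs_db and run xfs_fsr against the mounted OSD filesystem (device and mount point below are only placeholders for my setup):

xfs_db -r -c frag /dev/sdb1             # prints the fragmentation factor
xfs_fsr -v /var/lib/ceph/osd/ceph-0     # online defrag of the mounted fs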
Performance changes:
[client]
rbd cache = true
rbd cache writethrough until flush = true

[osd]
osd mount options xfs = "rw,noatime,inode64,logbsize=256k,delaylog,allocsize=4M"
osd_op_threads = 4
osd_disk_threads = 4
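The xfs mount options only take effect after a remount / OSD restart; the two thread settings can also be injected into running OSDs, e.g. (syntax from memory, please double-check):

ceph tell osd.* injectargs '--osd_op_threads 4 --osd_disk_threads 4'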


But I would expect much more speed for a single thread...

Udo

On 23.07.2014 22:13, Steve Anthony wrote:
> Ah, ok. That makes sense. With one concurrent operation I see numbers
> more in line with the read speeds I'm seeing from the filesystems on the
> rbd images.
>
> # rados -p bench bench 300 seq --no-cleanup -t 1
> Total time run:        300.114589
> Total reads made:     2795
> Read size:            4194304
> Bandwidth (MB/sec):    37.252
>
> Average Latency:       0.10737
> Max latency:           0.968115
> Min latency:           0.039754
>
> # rados -p bench bench 300 rand --no-cleanup -t 1
> Total time run:        300.164208
> Total reads made:     2996
> Read size:            4194304
> Bandwidth (MB/sec):    39.925
>
> Average Latency:       0.100183
> Max latency:           1.04772
> Min latency:           0.039584
>
> I really wish I could find my data on read speeds from a couple weeks
> ago. It's possible that they've always been in this range, but I
> remember one of my test users saturating his 1GbE link over NFS while
> copying from the rbd client to his workstation. Of course, it's also
> possible that the data set he was using was cached in RAM when he was
> testing, masking the lower rbd speeds.
>
> It just seems counterintuitive to me that read speeds would be so much
> slower than writes at the filesystem layer in practice. With images in
> the 10-100TB range, reading data at 20-60MB/s isn't going to be
> pleasant. Can you suggest any tunables or other approaches to
> investigate to improve these speeds, or are they in line with what you'd
> expect? Thanks for your help!
>
> -Steve
>
>
