Hi Bruce,
you can also check on the mon, e.g.
ceph --admin-daemon /var/run/ceph/ceph-mon.b.asok config show | grep cache

(I guess you have a number instead of the .b.)
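
If you don't know the mon id offhand, the admin sockets themselves show it
(assuming the default /var/run/ceph path):

ls /var/run/ceph/ceph-mon.*.asok

You can also query the single value directly over the same socket:

ceph --admin-daemon /var/run/ceph/ceph-mon.<id>.asok config get rbd_cache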

Udo
On 30.01.2015 22:02, Bruce McFarland wrote:
>
> The ceph daemon isn’t running on the client with the rbd device, so I
> can’t verify whether caching is disabled at the librbd level on the
> client. If you mean on the storage nodes, I’ve had some issues dumping
> the config. Does rbd caching occur on the storage nodes, the client, or
> both?
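>
> If it would help: as far as I understand, one can enable an admin socket
> for clients in ceph.conf and query the running librbd config through it
> (a sketch, untested here - the kernel rbd driver won’t create one, only
> librbd consumers such as qemu or fio’s rbd engine do):
>
> [client]
> admin socket = /var/run/ceph/$cluster-$name.$pid.asok
>
> ceph --admin-daemon /var/run/ceph/<client socket> config show | grep rbd_cache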
>
>  
>
>  
>
> *From:* Udo Lembke [mailto:ulem...@polarzone.de]
> *Sent:* Friday, January 30, 2015 1:00 PM
> *To:* Bruce McFarland; ceph-us...@ceph.com
> *Cc:* Prashanth Nednoor
> *Subject:* Re: [ceph-users] RBD caching on 4K reads???
>
>  
>
> Hi Bruce,
> hmm, that sounds to me like the rbd cache.
> Can you check whether the cache is really disabled in the running config with
>
> ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config show | grep cache
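>
> or, to pull just the one value over the same admin socket:
>
> ceph --admin-daemon /var/run/ceph/ceph-osd.0.asok config get rbd_cache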
>
> Udo
>
> On 30.01.2015 21:51, Bruce McFarland wrote:
>
>     I have a cluster and have created an rbd device - /dev/rbd1. It
>     shows up as expected with ‘rbd --image test info’ and rbd
>     showmapped. I have been looking at cluster performance with the
>     usual Linux block device tools – fio and vdbench. When I look at
>     writes and large block sequential reads I’m seeing what I’d expect,
>     with performance limited by either my cluster interconnect
>     bandwidth or the backend device throughput – 1 GbE frontend
>     and cluster network, and 7200 rpm SATA OSDs with 1 SSD/OSD for
>     journal. Everything looks good EXCEPT 4K random reads. There is
>     caching occurring somewhere in my system that I haven’t been able
>     to detect and suppress - yet.
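>
>     For reference, the 4K random read case is run with something along
>     these lines (parameters illustrative, not my exact job file):
>
>     fio --name=randread --filename=/dev/rbd1 --rw=randread --bs=4k \
>         --direct=1 --ioengine=libaio --iodepth=32 --runtime=60
>
>     where --direct=1 should bypass the client page cache.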
>
>      
>
>     I’ve set ‘rbd_cache=false’ in the [client] section of ceph.conf on
>     the client, monitor, and storage nodes. I’ve flushed the system
>     caches on the client and storage nodes before each test run, i.e.
>     vm.drop_caches=3, and set huge pages to the maximum available to
>     consume free system memory so that it can’t be used for system
>     cache. I’ve also disabled read-ahead on all of the HDD/OSDs.
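>
>     Concretely, the flush/suppress steps were along these lines (sdb is
>     an illustrative device name, repeated for every OSD disk):
>
>     sync; echo 3 > /proc/sys/vm/drop_caches      # page cache, dentries, inodes
>     echo 0 > /sys/block/sdb/queue/read_ahead_kb  # read-ahead off on OSD disks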
>
>      
>
>     When I run a 4K random read workload on the client, the most I
>     could expect would be ~100 IOPS/OSD x the number of OSDs – I’m
>     seeing an order of magnitude more than that, AND iostat on the
>     storage nodes shows no read activity on the OSD disks.
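>
>     (For example, with 10 OSDs that math gives ~1,000 IOPS - the OSD
>     count here is only illustrative. I’m watching the disks with plain
>     iostat, e.g.
>
>     iostat -x 5
>
>     and r/s on the OSD devices stays essentially zero during the runs.)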
>
>      
>
>     Any ideas on what I’ve overlooked? There appears to be some
>     read-ahead caching that I’ve missed.
>
>      
>
>     Thanks,
>
>     Bruce
>

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
