On 03/06/2014 01:51 AM, Robert van Leeuwen wrote:
Hi,

We experience something similar with our OpenStack Swift setup.
You can change the sysctl "vm.vfs_cache_pressure" to make sure more inodes are
kept in cache.
(Do not set this to 0 because you will trigger the OOM killer at some point ;)


I've been setting it to around 10, which helps in some cases (up to about a 20% improvement from what I've seen). I actually see the most benefit with mid-sized IOs around 128K. I suspect there is a curve: if the IOs are big you aren't doing that many lookups, and if the IOs are small you don't evict inodes/dentries due to buffered data, so somewhere in the middle is where it hurts most.
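
In case it's useful, here is a minimal sketch of how I check and set that knob from Python (the value 10 is just the example above; writing to it needs root, and the change does not persist across reboots unless you also add it to /etc/sysctl.conf):

# Read the current value of vm.vfs_cache_pressure via procfs.
with open("/proc/sys/vm/vfs_cache_pressure") as f:
    print("current vfs_cache_pressure:", f.read().strip())

# Lower it to 10 so cached dentries/inodes are reclaimed less aggressively.
# Equivalent to running: sysctl vm.vfs_cache_pressure=10  (requires root)
with open("/proc/sys/vm/vfs_cache_pressure", "w") as f:
    f.write("10\n")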

We also decided to go for nodes with more memory and smaller disks.
You can read about our experiences here:
http://engineering.spilgames.com/openstack-swift-lots-small-files/

Cheers,
Robert

From: ceph-users-boun...@lists.ceph.com [ceph-users-boun...@lists.ceph.com] on 
behalf of Guang Yang [yguan...@yahoo.com]
Hello all,
Recently I have been working on Ceph performance analysis on our cluster; our OSD
hardware looks like:
11 SATA disks, 4TB for each, 7200RPM
48GB RAM

When breaking down the latency, we found that half of it (the average latency
is around 60 milliseconds via radosgw) comes from file lookup and open
(there could be a couple of disk seeks there). Looking at the file system
cache (slabtop), we found that around 5M dentries / inodes are cached;
however, the host has around 110 million files (and directories) in total.
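
For anyone who wants to reproduce those cache numbers, a small sketch along these lines shows roughly what slabtop reports (reading /proc/slabinfo usually requires root; the xfs_inode slab name assumes XFS-backed OSDs, adjust if your filesystem differs):

# Count cached dentry / inode slab objects, roughly what slabtop shows.
SLABS = ("dentry", "xfs_inode")   # assumed slab names for XFS-backed OSDs

with open("/proc/slabinfo") as f:   # usually readable by root only
    for line in f:
        fields = line.split()
        if fields and fields[0] in SLABS:
            # slabinfo columns: name, active_objs, num_objs, objsize, ...
            name, active, total = fields[0], int(fields[1]), int(fields[2])
            print("%s: %d active of %d allocated objects" % (name, active, total))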

I am wondering if there is any good experience within the community tuning for the
same workload, e.g. changing the inode size, or using the mkfs.xfs -n size=64k
option [1]?
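
For concreteness, this is roughly the formatting invocation I am asking about, as a small Python sketch (the device path is a placeholder for the OSD data partition; -i size= and -n size= are the standard mkfs.xfs options for inode size and directory block size, and the 2048-byte inode size is only an example value, not something we have validated):

import subprocess

DEVICE = "/dev/sdb1"   # placeholder; point at the actual OSD data partition

# -i size=2048 enlarges the on-disk inodes, -n size=64k enlarges directory
# blocks; both are aimed at cutting metadata seeks on large directories.
subprocess.check_call(["mkfs.xfs", "-f",
                       "-i", "size=2048",
                       "-n", "size=64k",
                       DEVICE])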

_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com

