Re: [ceph-users] XFS tuning on OSD

2014-03-06 Thread Mark Nelson
On 03/06/2014 01:51 AM, Robert van Leeuwen wrote: Hi, We experience something similar with our OpenStack Swift setup. You can change the sysctl vm.vfs_cache_pressure to make sure more inodes are kept in cache. (Do not set this to 0, because you will trigger the OOM killer at some point ;)
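[A minimal sketch of the tuning being described. vm.vfs_cache_pressure defaults to 100; lowering it biases the kernel toward keeping dentry and inode caches in memory. The value 10 is illustrative, not a recommendation from the thread:]

    # Check the current value (kernel default is 100).
    sysctl vm.vfs_cache_pressure

    # Lower it so dentry/inode caches are reclaimed less aggressively.
    # 10 is an illustrative value; keep it above 0 to avoid the OOM
    # risk mentioned above.
    sudo sysctl -w vm.vfs_cache_pressure=10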

Re: [ceph-users] XFS tuning on OSD

2014-03-06 Thread Yann Dupont - Veille Techno
On 05/03/2014 15:34, Guang Yang wrote: Hello all, Recently I have been working on Ceph performance analysis on our cluster. Our OSD hardware looks like: 11 SATA disks, 4 TB each, 7200 RPM; 48 GB RAM. When we break down the latency, we found that half of the latency (average latency is…

[ceph-users] XFS tuning on OSD

2014-03-05 Thread Guang Yang
Hello all, Recently I have been working on Ceph performance analysis on our cluster. Our OSD hardware looks like: 11 SATA disks, 4 TB each, 7200 RPM; 48 GB RAM. When we break down the latency, we found that half of the latency (average latency is around 60 milliseconds via radosgw) comes from file…
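[A hedged sketch of how one might confirm where that latency sits on a setup like this. `ceph osd perf` and the kernel slab counters are standard tools of this era; the interpretation is illustrative, not taken from the thread:]

    # Per-OSD commit/apply latency as reported by the cluster itself.
    ceph osd perf

    # See how much RAM currently holds XFS inode and dentry objects;
    # if these are small relative to the 48 GB of RAM, metadata lookups
    # will be falling through to the spinning disks.
    sudo slabtop -o | grep -E 'xfs_inode|dentry'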

Re: [ceph-users] XFS tuning on OSD

2014-03-05 Thread Robert van Leeuwen
Hi, We experience something similar with our OpenStack Swift setup. You can change the sysctl vm.vfs_cache_pressure to make sure more inodes are kept in cache. (Do not set this to 0, because you will trigger the OOM killer at some point ;) We also decided to go for nodes with more memory…
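[To make such a setting survive a reboot, the conventional approach, sketched here rather than spelled out in the thread, is to persist it in sysctl configuration. The file name and value are illustrative:]

    # /etc/sysctl.d/90-vfs-cache.conf -- illustrative file name and value
    vm.vfs_cache_pressure = 10

    # Apply all sysctl configuration files without rebooting:
    sudo sysctl --system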