On 03/06/2014 01:51 AM, Robert van Leeuwen wrote:
Hi,
We experience something similar with our OpenStack Swift setup.
You can change the sysctl vm.vfs_cache_pressure to make sure more inodes are
kept in cache.
(Do not set this to 0, because you will trigger the OOM killer at some point ;)
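For reference, a minimal sketch of how such a tuning could be applied. The value 50 here is purely illustrative (the thread does not recommend a specific number); this is standard Linux sysctl usage, not a Ceph- or Swift-specific setting:

```shell
# Inspect the current value (the kernel default is 100; values below 100
# make the kernel prefer keeping dentry/inode caches over reclaiming them).
cat /proc/sys/vm/vfs_cache_pressure

# Apply a lower value at runtime (requires root):
sysctl -w vm.vfs_cache_pressure=50

# To persist across reboots, add the setting to /etc/sysctl.conf:
#   vm.vfs_cache_pressure = 50
```

Per the warning above, 0 disables reclaim of these caches entirely, which can exhaust memory and trigger the OOM killer.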
On 05/03/2014 15:34, Guang Yang wrote:
Hello all,
Recently I am working on Ceph performance analysis on our cluster. Our OSD
hardware looks like:
11 SATA disks, 4TB each, 7200RPM
48GB RAM
When breaking down the latency, we found that half of the latency (average
latency is around 60 milliseconds via radosgw) comes from file
We also decided to go for nodes with more memory
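A quick way to see whether the dentry/inode caches are actually being retained is to watch the kernel slab allocator. This is a generic Linux inspection sketch (not a command given in this thread), and reading /proc/slabinfo typically requires root:

```shell
# Show the largest slab caches once and exit; on an OSD node under cache
# pressure, the dentry and filesystem inode caches (e.g. xfs_inode or
# ext4_inode_cache) shrink as the kernel reclaims them.
slabtop -o | head -n 15

# The same counters are available in raw form (root usually required):
grep -E 'dentry|inode_cache' /proc/slabinfo
```

Watching these counters before and after lowering vm.vfs_cache_pressure gives a rough check that more inodes are staying resident instead of being re-read from disk.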